Feb 18 00:08:36 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 18 00:08:36 crc kubenswrapper[5121]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
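The deprecation notices above all point to the kubelet config file mechanism. As a hedged illustration only (field names are from the upstream kubelet.config.k8s.io/v1beta1 KubeletConfiguration API; the values are placeholders, except the CRI endpoint and config path, which appear later in this log's FLAG dump), the flagged options map to config-file fields roughly like this:

```yaml
# Illustrative sketch of config-file equivalents (e.g. in /etc/kubernetes/kubelet.conf)
# for the deprecated flags warned about above. Field names follow the upstream
# kubelet.config.k8s.io/v1beta1 API; values are examples, not this node's settings.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: "unix:///var/run/crio/crio.sock"  # replaces --container-runtime-endpoint
volumePluginDir: "/etc/kubernetes/kubelet-plugins/volume/exec"  # replaces --volume-plugin-dir
registerWithTaints:                       # replaces --register-with-taints
- key: "node-role.kubernetes.io/master"
  effect: "NoSchedule"
systemReserved:                           # replaces --system-reserved
  cpu: "500m"
  memory: "1Gi"
evictionHard:                             # recommended replacement for --minimum-container-ttl-duration
  memory.available: "100Mi"
```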
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.928159 5121 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938252 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938309 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938318 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938326 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938335 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938346 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938354 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938362 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938369 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938377 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938384 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938391 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938399 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938407 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938417 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938428 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938452 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938461 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938470 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938477 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938485 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938493 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938500 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938508 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938545 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938554 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938562 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938569 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938576 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938584 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938594 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938604 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938616 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938627 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938683 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938697 5121 feature_gate.go:328] unrecognized feature gate: Example
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938711 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938722 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938732 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938742 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938751 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938759 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938766 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938773 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938781 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938789 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938796 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938803 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938810 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938817 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938825 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938832 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938840 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938848 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938858 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938865 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938874 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938883 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938892 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938901 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938910 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938919 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938928 5121 feature_gate.go:328] unrecognized feature gate: Example2
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938937 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938945 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938956 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938964 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938973 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938982 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.938995 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939004 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939013 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939021 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939028 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939035 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939042 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939049 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939058 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939068 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939076 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939083 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939090 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939096 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939104 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939112 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.939119 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940066 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940079 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940087 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940095 5121 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940103 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940111 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940118 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940125 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940132 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940139 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940147 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940154 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940161 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940168 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940175 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940183 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940189 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940197 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940203 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940211 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940218 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940225 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940233 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940240 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940249 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940257 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940264 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940271 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940278 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940285 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940293 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940300 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940308 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940315 5121 feature_gate.go:328] unrecognized feature gate: Example
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940323 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940330 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940337 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940344 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940351 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940358 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940365 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940373 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940380 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940387 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940395 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940403 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940412 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940419 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940426 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940434 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940442 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940450 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940457 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940464 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940472 5121 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940479 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940486 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940494 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940501 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940509 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940516 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940523 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940533 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940542 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940550 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940557 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940565 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940572 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940579 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940586 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940595 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940602 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940609 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940616 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940624 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940631 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940638 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
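The long runs of "unrecognized feature gate" warnings are benign here: the gates are passed to the kubelet via its configuration, and names that exist only in OpenShift operators (GatewayAPIController, NewOLM, InsightsConfig, and so on) are unknown to the upstream kubelet's feature_gate.go, which logs each one at warning level and continues startup. As a hedged sketch only (the `featureGates` field is part of the upstream KubeletConfiguration v1beta1 API; the gate names below are copied from this log), such gates would be declared like this:

```yaml
# Illustrative: a featureGates stanza in a KubeletConfiguration file. The
# upstream kubelet recognizes only its own gates; each OpenShift-specific
# name below would trigger an "unrecognized feature gate" warning like the
# ones logged above, without aborting startup.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GatewayAPIController: true
  NewOLM: true
  ServiceAccountTokenNodeBinding: true  # GA upstream; enabling it logs the "Setting GA feature gate" notice
```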
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940645 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940689 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940696 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940703 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940713 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940722 5121 feature_gate.go:328] unrecognized feature gate: Example2
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940730 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940737 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.940744 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941732 5121 flags.go:64] FLAG: --address="0.0.0.0"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941756 5121 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941771 5121 flags.go:64] FLAG: --anonymous-auth="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941781 5121 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941794 5121 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941808 5121 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941819 5121 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941829 5121 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941838 5121 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941846 5121 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941858 5121 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941867 5121 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941875 5121 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941883 5121 flags.go:64] FLAG: --cgroup-root=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941891 5121 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941899 5121 flags.go:64] FLAG: --client-ca-file=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941908 5121 flags.go:64] FLAG: --cloud-config=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941916 5121 flags.go:64] FLAG: --cloud-provider=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941923 5121 flags.go:64] FLAG: --cluster-dns="[]"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941934 5121 flags.go:64] FLAG: --cluster-domain=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941942 5121 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941951 5121 flags.go:64] FLAG: --config-dir=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941959 5121 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941968 5121 flags.go:64] FLAG: --container-log-max-files="5"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941977 5121 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941986 5121 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.941994 5121 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942002 5121 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942011 5121 flags.go:64] FLAG: --contention-profiling="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942019 5121 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942026 5121 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942035 5121 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942043 5121 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942053 5121 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942061 5121 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942069 5121 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942077 5121 flags.go:64] FLAG: --enable-load-reader="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942091 5121 flags.go:64] FLAG: --enable-server="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942098 5121 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942108 5121 flags.go:64] FLAG: --event-burst="100"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942116 5121 flags.go:64] FLAG: --event-qps="50"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942124 5121 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942135 5121 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942144 5121 flags.go:64] FLAG: --eviction-hard=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942153 5121 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942161 5121 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942169 5121 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942177 5121 flags.go:64] FLAG: --eviction-soft=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942185 5121 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942193 5121 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942202 5121 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942211 5121 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942219 5121 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942227 5121 flags.go:64] FLAG: --fail-swap-on="true"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942235 5121 flags.go:64] FLAG: --feature-gates=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942244 5121 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942252 5121 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942261 5121 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942269 5121 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942277 5121 flags.go:64] FLAG: --healthz-port="10248"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942285 5121 flags.go:64] FLAG: --help="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942293 5121 flags.go:64] FLAG: --hostname-override=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942301 5121 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942309 5121 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942317 5121 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942324 5121 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942332 5121 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942340 5121 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942348 5121 flags.go:64] FLAG: --image-service-endpoint=""
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942359 5121 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942369 5121 flags.go:64] FLAG: --kube-api-burst="100"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942377 5121 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942386 5121 flags.go:64] FLAG: --kube-api-qps="50"
Feb
18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942393 5121 flags.go:64] FLAG: --kube-reserved="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942405 5121 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942412 5121 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942420 5121 flags.go:64] FLAG: --kubelet-cgroups="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942428 5121 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942436 5121 flags.go:64] FLAG: --lock-file="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942444 5121 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942452 5121 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942460 5121 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942473 5121 flags.go:64] FLAG: --log-json-split-stream="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942481 5121 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942489 5121 flags.go:64] FLAG: --log-text-split-stream="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942497 5121 flags.go:64] FLAG: --logging-format="text" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942504 5121 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942513 5121 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942521 5121 flags.go:64] FLAG: --manifest-url="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942528 5121 flags.go:64] FLAG: --manifest-url-header="" Feb 18 
00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942539 5121 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942548 5121 flags.go:64] FLAG: --max-open-files="1000000" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942557 5121 flags.go:64] FLAG: --max-pods="110" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942566 5121 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942574 5121 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942582 5121 flags.go:64] FLAG: --memory-manager-policy="None" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942589 5121 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942597 5121 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942606 5121 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942613 5121 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942634 5121 flags.go:64] FLAG: --node-status-max-images="50" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942645 5121 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942690 5121 flags.go:64] FLAG: --oom-score-adj="-999" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942700 5121 flags.go:64] FLAG: --pod-cidr="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942708 5121 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 18 00:08:36 crc kubenswrapper[5121]: 
I0218 00:08:36.942721 5121 flags.go:64] FLAG: --pod-manifest-path="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942729 5121 flags.go:64] FLAG: --pod-max-pids="-1" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942744 5121 flags.go:64] FLAG: --pods-per-core="0" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942752 5121 flags.go:64] FLAG: --port="10250" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942760 5121 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942768 5121 flags.go:64] FLAG: --provider-id="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942776 5121 flags.go:64] FLAG: --qos-reserved="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942784 5121 flags.go:64] FLAG: --read-only-port="10255" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942792 5121 flags.go:64] FLAG: --register-node="true" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942800 5121 flags.go:64] FLAG: --register-schedulable="true" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942809 5121 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942822 5121 flags.go:64] FLAG: --registry-burst="10" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942830 5121 flags.go:64] FLAG: --registry-qps="5" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942837 5121 flags.go:64] FLAG: --reserved-cpus="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942845 5121 flags.go:64] FLAG: --reserved-memory="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942854 5121 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942862 5121 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942870 5121 flags.go:64] FLAG: --rotate-certificates="false" Feb 18 
00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942878 5121 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942907 5121 flags.go:64] FLAG: --runonce="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942914 5121 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942923 5121 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942932 5121 flags.go:64] FLAG: --seccomp-default="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942940 5121 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942948 5121 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942957 5121 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942968 5121 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.942984 5121 flags.go:64] FLAG: --storage-driver-password="root" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943004 5121 flags.go:64] FLAG: --storage-driver-secure="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943017 5121 flags.go:64] FLAG: --storage-driver-table="stats" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943027 5121 flags.go:64] FLAG: --storage-driver-user="root" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943056 5121 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943068 5121 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943078 5121 flags.go:64] FLAG: --system-cgroups="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943096 5121 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943115 5121 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943126 5121 flags.go:64] FLAG: --tls-cert-file="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943136 5121 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943149 5121 flags.go:64] FLAG: --tls-min-version="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943157 5121 flags.go:64] FLAG: --tls-private-key-file="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943165 5121 flags.go:64] FLAG: --topology-manager-policy="none" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943173 5121 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943181 5121 flags.go:64] FLAG: --topology-manager-scope="container" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943189 5121 flags.go:64] FLAG: --v="2" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943211 5121 flags.go:64] FLAG: --version="false" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943222 5121 flags.go:64] FLAG: --vmodule="" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943232 5121 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.943241 5121 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943426 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943439 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943449 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943459 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943468 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943476 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943484 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943491 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943499 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943506 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943513 5121 feature_gate.go:328] unrecognized feature gate: Example2 Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943520 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943528 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943536 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943543 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943551 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943559 5121 feature_gate.go:328] 
unrecognized feature gate: AzureMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943566 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943575 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943583 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943591 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943598 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943605 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943612 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943619 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943627 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943634 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943641 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943685 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943695 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943705 5121 feature_gate.go:328] unrecognized feature gate: Example Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943713 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943720 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943728 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943735 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943751 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943758 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943766 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943773 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943780 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943796 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943804 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943811 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943818 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 
00:08:36.943825 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943833 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943849 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943857 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943865 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943873 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943881 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943888 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943900 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943907 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943914 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943922 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943929 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943937 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943944 5121 
feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943952 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943960 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943967 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943975 5121 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943983 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943989 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.943997 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944004 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944011 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944018 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944025 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944032 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944039 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944047 5121 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 18 
00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944054 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944061 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944068 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944076 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944084 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944091 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944100 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944107 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944115 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944123 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944130 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944140 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.944149 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.946589 5121 feature_gate.go:384] feature gates: 
{map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.960986 5121 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.961027 5121 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961128 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961137 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961142 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961147 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961152 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961156 5121 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961160 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961165 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961169 5121 feature_gate.go:328] unrecognized feature gate: 
InsightsOnDemandDataGather Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961173 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961177 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961182 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961187 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961193 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961199 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961204 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961209 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961214 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961219 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961225 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961231 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961236 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961239 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961243 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961246 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961252 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961255 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961259 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961263 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961266 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961269 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961273 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961276 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961280 5121 feature_gate.go:328] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961283 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961286 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961290 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961293 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961297 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961301 5121 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961304 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961308 5121 feature_gate.go:328] unrecognized feature gate: Example2 Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961312 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961315 5121 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961319 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961322 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961326 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961329 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:08:36 
crc kubenswrapper[5121]: W0218 00:08:36.961333 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961336 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961342 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961346 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961349 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961352 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961355 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961373 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961377 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961381 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961385 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961389 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961392 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961396 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961399 5121 
feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961402 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961406 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961409 5121 feature_gate.go:328] unrecognized feature gate: Example Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961413 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961416 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961419 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961422 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961427 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961432 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961435 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961439 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961442 5121 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961446 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961450 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961453 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961457 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961460 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961463 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961467 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961473 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961477 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961481 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961484 5121 feature_gate.go:328] unrecognized 
feature gate: MultiArchInstallAzure Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.961490 5121 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961625 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961631 5121 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961634 5121 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961638 5121 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961642 5121 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961646 5121 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961661 5121 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961664 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961668 5121 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961671 5121 feature_gate.go:328] unrecognized 
feature gate: NetworkSegmentation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961675 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961678 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961682 5121 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961686 5121 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961689 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961692 5121 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961696 5121 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961699 5121 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961704 5121 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961708 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961713 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961717 5121 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961721 5121 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961724 5121 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961728 5121 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961731 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961735 5121 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961738 5121 feature_gate.go:328] unrecognized feature gate: Example Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961742 5121 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961745 5121 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961749 5121 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961752 5121 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961755 5121 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 18 00:08:36 crc 
kubenswrapper[5121]: W0218 00:08:36.961758 5121 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961761 5121 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961765 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961768 5121 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961772 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961775 5121 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961779 5121 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961782 5121 feature_gate.go:328] unrecognized feature gate: Example2 Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961785 5121 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961788 5121 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961792 5121 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961796 5121 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961799 5121 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961802 5121 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961806 5121 feature_gate.go:328] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961809 5121 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961812 5121 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961815 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961819 5121 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961822 5121 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961825 5121 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961829 5121 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961832 5121 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961835 5121 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961839 5121 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961842 5121 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961845 5121 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961849 5121 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961852 5121 feature_gate.go:328] unrecognized 
feature gate: OVNObservability Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961856 5121 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961859 5121 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961862 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961865 5121 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961870 5121 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961873 5121 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961877 5121 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961881 5121 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961885 5121 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961890 5121 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961895 5121 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961899 5121 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961903 5121 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961906 5121 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961910 5121 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961914 5121 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961917 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961921 5121 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961924 5121 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961928 5121 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961931 5121 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961934 5121 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961938 5121 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 18 00:08:36 crc kubenswrapper[5121]: W0218 00:08:36.961941 5121 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.961947 5121 
feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.962751 5121 server.go:962] "Client rotation is on, will bootstrap in background" Feb 18 00:08:36 crc kubenswrapper[5121]: E0218 00:08:36.965844 5121 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.969421 5121 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.969535 5121 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.970687 5121 server.go:1019] "Starting client certificate rotation" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.970918 5121 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 18 00:08:36 crc kubenswrapper[5121]: I0218 00:08:36.971057 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.001789 5121 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 
00:08:37.005025 5121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.005504 5121 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.023987 5121 log.go:25] "Validated CRI v1 runtime API" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.079161 5121 log.go:25] "Validated CRI v1 image API" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.082071 5121 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.088214 5121 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-18-00-02-13-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.088277 5121 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.108842 5121 manager.go:217] Machine: 
{Timestamp:2026-02-18 00:08:37.106430826 +0000 UTC m=+0.620888581 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33649913856 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:48370276-1fd8-44a9-96f1-caf0cd2b4c95 BootID:71477c84-568f-4f6d-8a8d-dd02a666cc72 Filesystems:[{Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824958976 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824954880 Type:vfs Inodes:4107655 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107655 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1d:53:4b Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1d:53:4b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ba:52:c0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:ef:65:70 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:63:a4:e6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a5:3e:b8 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:7e:5f:ab:e8:f6:96 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 
MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1e:50:47:97:73:e3 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649913856 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 
Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.109116 5121 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.109312 5121 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.111347 5121 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.111397 5121 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None
","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.111678 5121 topology_manager.go:138] "Creating topology manager with none policy" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.111694 5121 container_manager_linux.go:306] "Creating device plugin manager" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.112275 5121 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.113132 5121 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.114009 5121 state_mem.go:36] "Initialized new in-memory state store" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.114201 5121 server.go:1267] "Using root directory" path="/var/lib/kubelet" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.117180 5121 kubelet.go:491] "Attempting to sync node with API server" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.117234 5121 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.117254 5121 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.117269 5121 kubelet.go:397] "Adding apiserver pod source" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.117473 5121 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.120040 5121 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.120086 5121 state_mem.go:40] "Initialized 
new in-memory state store for pod resource information tracking"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.128123 5121 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.128101 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.128179 5121 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.128506 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.135545 5121 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.136088 5121 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.137136 5121 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138769 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138814 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138830 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138845 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138858 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138874 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138888 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138902 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138917 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138940 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.138961 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.139489 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.140717 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.140758 5121 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.142723 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.168434 5121 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.168563 5121 server.go:1295] "Started kubelet"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.168824 5121 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.168972 5121 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.169568 5121 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.169794 5121 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 18 00:08:37 crc systemd[1]: Started Kubernetes Kubelet.
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.172204 5121 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.173002 5121 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.173197 5121 server.go:317] "Adding debug handlers to kubelet server"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.174355 5121 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.174405 5121 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.174444 5121 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.174369 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.174514 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.173517 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18952ea5966ee7ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,LastTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.174988 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.176387 5121 factory.go:55] Registering systemd factory
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.176464 5121 factory.go:223] Registration of the systemd container factory successfully
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.177141 5121 factory.go:153] Registering CRI-O factory
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.177201 5121 factory.go:223] Registration of the crio container factory successfully
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.177436 5121 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.177513 5121 factory.go:103] Registering Raw factory
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.177547 5121 manager.go:1196] Started watching for new ooms in manager
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.181255 5121 manager.go:319] Starting recovery of all containers
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.219434 5121 manager.go:324] Recovery completed
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.243197 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.246225 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.246335 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.246355 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.247430 5121 cpu_manager.go:222] "Starting CPU manager" policy="none"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.247454 5121 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.247486 5121 state_mem.go:36] "Initialized new in-memory state store"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.260573 5121 policy_none.go:49] "None policy: Start"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.260642 5121 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.260690 5121 state_mem.go:35] "Initializing new in-memory state store"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.262632 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.262825 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.262851 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.262921 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.262945 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263006 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263027 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263076 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263103 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263124 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263182 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263203 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263255 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263283 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263341 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263371 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263390 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263640 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263700 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263724 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263777 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263798 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263819 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263871 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263893 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263916 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.263975 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264009 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264095 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264153 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264174 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264544 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264616 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264430 5121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.264643 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266120 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266149 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266173 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266193 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266213 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266232 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266255 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266310 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266330 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266348 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266377 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266396 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.266416 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.269175 5121 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.269263 5121 status_manager.go:230] "Starting to sync pod status with apiserver"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.269340 5121 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.269359 5121 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.269459 5121 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271058 5121 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271118 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271144 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271165 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271186 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271219 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271242 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271282 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271301 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271324 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271378 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271397 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271416 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271437 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271456 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271485 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271506 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271525 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271547 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271568 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271585 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271603 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271641 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271701 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271719 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271742 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271761 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271780 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271799 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271819 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271842 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271861 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271881 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271903 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.271924 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272019 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272040 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272077 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272096 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272116 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272139 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272158 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272177 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272195 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.272189 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272213 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272233 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert"
seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272250 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272269 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272286 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272304 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272323 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272342 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 
00:08:37.272362 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272383 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272402 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272420 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272438 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272458 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272477 5121 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272496 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272515 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272533 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272550 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272568 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272586 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272606 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272646 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272694 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272713 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272731 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272749 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272766 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272784 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272806 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272825 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272844 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272865 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" 
seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272887 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272906 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272925 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272943 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272964 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.272982 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: 
I0218 00:08:37.273001 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273020 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273038 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273057 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273075 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273093 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273116 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273134 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273153 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273172 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273191 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273212 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273231 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273250 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273271 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273292 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273310 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273329 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273348 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" 
volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273368 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273387 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273597 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273619 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273636 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273679 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273698 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273720 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273739 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273758 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273776 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273798 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" 
seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273818 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273838 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273857 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273877 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273896 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273914 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: 
I0218 00:08:37.273933 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273954 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273973 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.273992 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274035 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274057 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274075 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274094 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274114 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274133 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274152 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274171 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274191 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274213 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274232 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274252 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274271 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274291 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274309 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274329 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274347 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274367 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274386 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274404 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274424 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274443 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274460 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274476 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274492 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274511 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274529 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274545 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274563 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274581 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274599 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274617 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274687 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274708 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274726 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274745 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274764 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274782 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274801 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274820 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274838 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274858 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274877 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274896 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274915 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274936 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.274999 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275020 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275046 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275067 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275086 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275107 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275127 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275146 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275212 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275230 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275249 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275268 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275306 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275327 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275345 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275366 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275386 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275404 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275424 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275442 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275460 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275478 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275496 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275515 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275534 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275552 5121 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275569 5121 reconstruct.go:97] "Volume reconstruction finished"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.275582 5121 reconciler.go:26] "Reconciler: start to sync state"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.275642 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.316235 5121 manager.go:341] "Starting Device Plugin manager"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.316541 5121 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.316565 5121 server.go:85] "Starting device plugin registration server"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.317148 5121 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.317170 5121 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.317434 5121 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.317562 5121 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.317571 5121 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.321524 5121 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.321619 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.370615 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.370862 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.372125 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.372168 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.372183 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.372927 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.373510 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.373634 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.374823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.374885 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.374899 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.375005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.375054 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.375070 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.375527 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.375982 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.376127 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.376218 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.376774 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377120 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377135 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377260 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377293 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377304 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377322 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377396 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377725 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377755 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.377783 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378298 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378373 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378509 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378576 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378586 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378725 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.378751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379155 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379206 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379220 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379530 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.379561 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.380223 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.380709 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.380747 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381081 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381115 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381132 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381529 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381561 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.381571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.382079 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.382142 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.382908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.382944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.382957 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.413455 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.418284 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.419601 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.419766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.419900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.420027 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.420841 5121 kubelet_node_status.go:110] "Unable to register node with 
API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.437130 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.449767 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478684 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478827 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478863 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod 
\"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478821 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.478993 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479037 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479082 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479128 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479206 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479306 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479339 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479362 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479372 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479403 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479436 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479479 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479515 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479590 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479624 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479690 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479737 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.479774 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: 
\"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.482523 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.483080 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.489515 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581550 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581703 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581673 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581763 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581795 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581821 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581840 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc 
kubenswrapper[5121]: I0218 00:08:37.581883 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581884 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581905 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581922 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581927 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581940 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581956 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581958 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581940 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.582049 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.582012 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.581923 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.582106 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.582140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.621596 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.623444 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.623534 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.623558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.623600 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.624274 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.715229 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.738760 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.751092 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: W0218 00:08:37.763851 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-86adf86a48c5132c0aaf21fd1a7157741633670b9eca4b3393642defd6170855 WatchSource:0}: Error finding container 86adf86a48c5132c0aaf21fd1a7157741633670b9eca4b3393642defd6170855: Status 404 returned error can't find the container with id 86adf86a48c5132c0aaf21fd1a7157741633670b9eca4b3393642defd6170855 Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.771115 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.811295 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: I0218 00:08:37.811610 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 18 00:08:37 crc kubenswrapper[5121]: E0218 00:08:37.811632 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Feb 18 00:08:37 crc kubenswrapper[5121]: W0218 00:08:37.815139 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-4ee67f938b8dcea9c37ac438d071fbc16acbbafc5335f380b1405636d0b1e41a WatchSource:0}: Error finding container 4ee67f938b8dcea9c37ac438d071fbc16acbbafc5335f380b1405636d0b1e41a: Status 404 returned error can't find the container with id 4ee67f938b8dcea9c37ac438d071fbc16acbbafc5335f380b1405636d0b1e41a Feb 18 00:08:37 crc kubenswrapper[5121]: W0218 00:08:37.815604 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-e8094d1706eb0efb6f996d1058c60d529e5afe73d4af81ec4c8b29491ff0718b WatchSource:0}: Error finding container e8094d1706eb0efb6f996d1058c60d529e5afe73d4af81ec4c8b29491ff0718b: Status 404 returned error can't find the container with id e8094d1706eb0efb6f996d1058c60d529e5afe73d4af81ec4c8b29491ff0718b Feb 18 00:08:37 crc kubenswrapper[5121]: W0218 00:08:37.843491 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-e3ae156952cecda3a92b0861308fec6730e16825484676a8250321582ad8b094 WatchSource:0}: Error finding 
container e3ae156952cecda3a92b0861308fec6730e16825484676a8250321582ad8b094: Status 404 returned error can't find the container with id e3ae156952cecda3a92b0861308fec6730e16825484676a8250321582ad8b094 Feb 18 00:08:37 crc kubenswrapper[5121]: W0218 00:08:37.850519 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-b88eb5c1f8e5ea462958e2b6859e630deff8c95ad4335e013e7645b34bbdb041 WatchSource:0}: Error finding container b88eb5c1f8e5ea462958e2b6859e630deff8c95ad4335e013e7645b34bbdb041: Status 404 returned error can't find the container with id b88eb5c1f8e5ea462958e2b6859e630deff8c95ad4335e013e7645b34bbdb041 Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.025396 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.026627 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.026758 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.026782 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.026818 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.027564 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.143666 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.275253 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"b88eb5c1f8e5ea462958e2b6859e630deff8c95ad4335e013e7645b34bbdb041"} Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.275700 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.277420 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e3ae156952cecda3a92b0861308fec6730e16825484676a8250321582ad8b094"} Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.278825 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e8094d1706eb0efb6f996d1058c60d529e5afe73d4af81ec4c8b29491ff0718b"} Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.279875 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4ee67f938b8dcea9c37ac438d071fbc16acbbafc5335f380b1405636d0b1e41a"} Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.281237 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"86adf86a48c5132c0aaf21fd1a7157741633670b9eca4b3393642defd6170855"}
Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.465614 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.490622 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.528572 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.613184 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s"
Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.828713 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.830585 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.830731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.830756 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:38 crc kubenswrapper[5121]: I0218 00:08:38.830822 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:08:38 crc kubenswrapper[5121]: E0218 00:08:38.831703 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.093675 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 18 00:08:39 crc kubenswrapper[5121]: E0218 00:08:39.095233 5121 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.144907 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.286110 5121 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf" exitCode=0
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.286237 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.286348 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.287375 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.287430 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.287440 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:39 crc kubenswrapper[5121]: E0218 00:08:39.287694 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.289246 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" exitCode=0
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.289337 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.289446 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.290675 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.290720 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.290735 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:39 crc kubenswrapper[5121]: E0218 00:08:39.291070 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.292376 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.292420 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.294026 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.294418 5121 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8" exitCode=0
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.294532 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.294517 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.294986 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.295020 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.295031 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.295254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.295302 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.295318 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:39 crc kubenswrapper[5121]: E0218 00:08:39.295756 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.296758 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1" exitCode=0
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.296801 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1"}
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.296891 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.297390 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.297424 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:39 crc kubenswrapper[5121]: I0218 00:08:39.297433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:39 crc kubenswrapper[5121]: E0218 00:08:39.297631 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.144256 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.154:6443: connect: connection refused
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.214258 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.313860 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.313919 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.313931 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.314040 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.315748 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.315809 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.315824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.316235 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.318794 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.318834 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.318847 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.322205 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.322253 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.322703 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.324123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.324169 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.324179 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.324469 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.325419 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.325427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.326065 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.326094 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.326104 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.326314 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.328116 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b" exitCode=0
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.328179 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b"}
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.328300 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.329375 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.329408 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.329419 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.329759 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.432793 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.434028 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.434082 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.434122 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:40 crc kubenswrapper[5121]: I0218 00:08:40.434149 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.434714 5121 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.154:6443: connect: connection refused" node="crc"
Feb 18 00:08:40 crc kubenswrapper[5121]: E0218 00:08:40.901695 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.154:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.334433 5121 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899" exitCode=0
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.334517 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899"}
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.334704 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.335317 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.335354 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.335366 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:41 crc kubenswrapper[5121]: E0218 00:08:41.335691 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341262 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e557acaeb0421de96a46c8b928250a661520e96302b53aa78465c02cff1e99b7"}
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341324 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc"}
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341358 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341281 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341447 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341362 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.341698 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.342291 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.342727 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.347747 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.347773 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.346090 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.347907 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.347945 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.346361 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.348139 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.348162 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:41 crc kubenswrapper[5121]: E0218 00:08:41.348407 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:41 crc kubenswrapper[5121]: E0218 00:08:41.348587 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.349169 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:41 crc kubenswrapper[5121]: I0218 00:08:41.349251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:41 crc kubenswrapper[5121]: E0218 00:08:41.349404 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:41 crc kubenswrapper[5121]: E0218 00:08:41.349760 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349635 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d"}
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c"}
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349729 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c"}
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349790 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349832 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.349790 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350814 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350838 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350887 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350901 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350855 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:42 crc kubenswrapper[5121]: I0218 00:08:42.350961 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:42 crc kubenswrapper[5121]: E0218 00:08:42.351317 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:42 crc kubenswrapper[5121]: E0218 00:08:42.351569 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.111373 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.362740 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be"}
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.362856 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc"}
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.362970 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.363030 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364212 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364240 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364293 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.364321 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:43 crc kubenswrapper[5121]: E0218 00:08:43.365156 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:43 crc kubenswrapper[5121]: E0218 00:08:43.365332 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.635328 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.637391 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.637464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.637482 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:43 crc kubenswrapper[5121]: I0218 00:08:43.637509 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.365212 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.366001 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.366043 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.366060 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:44 crc kubenswrapper[5121]: E0218 00:08:44.366732 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.888179 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.888587 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.889748 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.889793 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:44 crc kubenswrapper[5121]: I0218 00:08:44.889804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:44 crc kubenswrapper[5121]: E0218 00:08:44.890175 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.094084 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.368226 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.369174 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.369245 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.369261 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:45 crc kubenswrapper[5121]: E0218 00:08:45.370060 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.722446 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.723029 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.724259 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.724321 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:45 crc kubenswrapper[5121]: I0218 00:08:45.724345 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:45 crc kubenswrapper[5121]: E0218 00:08:45.725194 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.198152 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.198483 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.199993 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.200070 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.200091 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:47 crc kubenswrapper[5121]: E0218 00:08:47.200793 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: E0218 00:08:47.321933 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.882284 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.882641 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.884034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.884113 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.884139 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:08:47 crc kubenswrapper[5121]: E0218 00:08:47.884843 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.919226 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.929501 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:47 crc kubenswrapper[5121]: I0218 00:08:47.970446 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:08:48 crc kubenswrapper[5121]: I0218 00:08:48.378733 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:08:48 crc kubenswrapper[5121]: I0218 00:08:48.380291 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:08:48 crc kubenswrapper[5121]: I0218 00:08:48.380367 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:48 crc kubenswrapper[5121]: I0218 00:08:48.380396 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:48 crc kubenswrapper[5121]: E0218 00:08:48.381280 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:48 crc kubenswrapper[5121]: I0218 00:08:48.392248 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.253064 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.253557 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.254907 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.254978 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.255007 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:49 crc kubenswrapper[5121]: E0218 00:08:49.255765 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.386197 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.387387 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.387531 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:49 crc kubenswrapper[5121]: I0218 00:08:49.387558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:49 crc kubenswrapper[5121]: E0218 00:08:49.388552 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.389129 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.390163 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.390213 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.390233 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:50 crc kubenswrapper[5121]: E0218 00:08:50.390750 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.970877 5121 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 18 00:08:50 crc kubenswrapper[5121]: I0218 00:08:50.970987 5121 prober.go:120] "Probe failed" probeType="Startup" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 18 00:08:51 crc kubenswrapper[5121]: I0218 00:08:51.037292 5121 trace.go:236] Trace[1737184919]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:08:41.036) (total time: 10001ms): Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[1737184919]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:51.037) Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[1737184919]: [10.001196331s] [10.001196331s] END Feb 18 00:08:51 crc kubenswrapper[5121]: E0218 00:08:51.037365 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 18 00:08:51 crc kubenswrapper[5121]: I0218 00:08:51.145750 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 18 00:08:51 crc kubenswrapper[5121]: I0218 00:08:51.390308 5121 trace.go:236] Trace[1630986323]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:08:41.389) (total time: 10000ms): Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[1630986323]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:08:51.390) Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[1630986323]: 
[10.000994904s] [10.000994904s] END Feb 18 00:08:51 crc kubenswrapper[5121]: E0218 00:08:51.390392 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 18 00:08:51 crc kubenswrapper[5121]: E0218 00:08:51.483817 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.18952ea5966ee7ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,LastTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:08:51 crc kubenswrapper[5121]: I0218 00:08:51.493269 5121 trace.go:236] Trace[357856375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:08:41.491) (total time: 10001ms): Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[357856375]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:08:51.493) Feb 18 00:08:51 crc kubenswrapper[5121]: Trace[357856375]: [10.001525899s] [10.001525899s] END Feb 18 00:08:51 crc kubenswrapper[5121]: E0218 00:08:51.493340 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 18 00:08:52 crc kubenswrapper[5121]: I0218 00:08:52.263548 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 00:08:52 crc kubenswrapper[5121]: I0218 00:08:52.263639 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:08:52 crc kubenswrapper[5121]: I0218 00:08:52.270437 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 18 00:08:52 crc kubenswrapper[5121]: I0218 00:08:52.270557 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 18 00:08:53 crc kubenswrapper[5121]: E0218 00:08:53.415486 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.729942 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.730228 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.731165 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.731220 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.731231 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:55 crc kubenswrapper[5121]: E0218 00:08:55.731615 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:55 crc kubenswrapper[5121]: I0218 00:08:55.736232 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:08:56 crc kubenswrapper[5121]: E0218 00:08:56.256150 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 18 00:08:56 crc kubenswrapper[5121]: I0218 00:08:56.405980 5121 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 18 00:08:56 crc 
kubenswrapper[5121]: I0218 00:08:56.406062 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:56 crc kubenswrapper[5121]: I0218 00:08:56.407181 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:56 crc kubenswrapper[5121]: I0218 00:08:56.407251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:56 crc kubenswrapper[5121]: I0218 00:08:56.407278 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:56 crc kubenswrapper[5121]: E0218 00:08:56.408038 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.269952 5121 trace.go:236] Trace[72208094]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (18-Feb-2026 00:08:45.582) (total time: 11687ms): Feb 18 00:08:57 crc kubenswrapper[5121]: Trace[72208094]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11687ms (00:08:57.269) Feb 18 00:08:57 crc kubenswrapper[5121]: Trace[72208094]: [11.687724906s] [11.687724906s] END Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.270024 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.271010 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.273440 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.278568 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309142 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309247 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309142 5121 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": EOF" start-of-body= Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309470 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": EOF" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309827 5121 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.309954 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.322445 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.392261 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.411147 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.413108 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e557acaeb0421de96a46c8b928250a661520e96302b53aa78465c02cff1e99b7" exitCode=255 Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.413208 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"e557acaeb0421de96a46c8b928250a661520e96302b53aa78465c02cff1e99b7"} Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.413497 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.414284 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.414333 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.414350 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.414769 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.415128 5121 scope.go:117] "RemoveContainer" containerID="e557acaeb0421de96a46c8b928250a661520e96302b53aa78465c02cff1e99b7" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.500215 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.978291 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.978584 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.980069 
5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.980126 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.980139 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:57 crc kubenswrapper[5121]: E0218 00:08:57.980541 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:57 crc kubenswrapper[5121]: I0218 00:08:57.985501 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.150841 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.417702 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.419456 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51"} Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.419529 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.419770 5121 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420167 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420304 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420399 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420439 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420456 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:58 crc kubenswrapper[5121]: I0218 00:08:58.420407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:58 crc kubenswrapper[5121]: E0218 00:08:58.420884 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:58 crc kubenswrapper[5121]: E0218 00:08:58.421156 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.149318 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.285453 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 18 00:08:59 crc 
kubenswrapper[5121]: I0218 00:08:59.285753 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.286746 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.286798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.286811 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:59 crc kubenswrapper[5121]: E0218 00:08:59.287294 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.298090 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.424038 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.424825 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.427547 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" exitCode=255 Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.427642 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51"} Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.427758 5121 scope.go:117] "RemoveContainer" containerID="e557acaeb0421de96a46c8b928250a661520e96302b53aa78465c02cff1e99b7" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.427795 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.429252 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.429283 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.429338 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.429351 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:59 crc kubenswrapper[5121]: E0218 00:08:59.429941 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.431047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.431090 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.431101 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:08:59 crc kubenswrapper[5121]: E0218 00:08:59.431481 5121 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:08:59 crc kubenswrapper[5121]: I0218 00:08:59.431879 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:08:59 crc kubenswrapper[5121]: E0218 00:08:59.432950 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:08:59 crc kubenswrapper[5121]: E0218 00:08:59.828013 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:00 crc kubenswrapper[5121]: I0218 00:09:00.148962 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:00 crc kubenswrapper[5121]: I0218 00:09:00.432672 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 18 00:09:01 crc kubenswrapper[5121]: I0218 00:09:01.148472 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.491251 5121 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea5966ee7ff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,LastTimestamp:2026-02-18 00:08:37.168490495 +0000 UTC m=+0.682948260,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.497834 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.505573 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.510800 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.516463 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59fb0a659 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node 
Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.323794009 +0000 UTC m=+0.838251744,LastTimestamp:2026-02-18 00:08:37.323794009 +0000 UTC m=+0.838251744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.522998 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.372146527 +0000 UTC m=+0.886604282,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.530204 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC 
m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.372174788 +0000 UTC m=+0.886632543,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.535346 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.372189088 +0000 UTC m=+0.886646833,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.540159 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.374868947 +0000 UTC m=+0.889326682,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.547516 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.374892337 +0000 UTC m=+0.889350072,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.552407 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.374904888 +0000 UTC m=+0.889362623,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.556913 5121 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.375035503 +0000 UTC m=+0.889493238,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.562378 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.375062114 +0000 UTC m=+0.889519849,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.569397 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.375077154 +0000 UTC m=+0.889534889,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.577703 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.377110061 +0000 UTC m=+0.891567796,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.583902 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.377128091 +0000 UTC m=+0.891585826,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.592225 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.377142262 +0000 UTC m=+0.891599997,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.599112 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.377272236 +0000 UTC m=+0.891729961,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.606368 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.377299947 +0000 UTC m=+0.891757682,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.613511 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC 
m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.377308477 +0000 UTC m=+0.891766212,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.621237 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.379177868 +0000 UTC m=+0.893635613,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.629066 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.379213629 +0000 UTC m=+0.893671374,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.635211 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b132005\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b132005 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246361605 +0000 UTC m=+0.760819340,LastTimestamp:2026-02-18 00:08:37.37922705 +0000 UTC m=+0.893684795,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.640385 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b122a13\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b122a13 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246298643 +0000 UTC m=+0.760756388,LastTimestamp:2026-02-18 00:08:37.379502149 +0000 UTC m=+0.893959914,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.645400 5121 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18952ea59b12e846\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18952ea59b12e846 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.246347334 +0000 UTC m=+0.760805069,LastTimestamp:2026-02-18 00:08:37.379551251 +0000 UTC m=+0.894009026,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.650850 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5ba618b08 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.771594504 +0000 UTC m=+1.286052279,LastTimestamp:2026-02-18 00:08:37.771594504 +0000 UTC m=+1.286052279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.659428 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea5bd8bddd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.824699859 +0000 UTC m=+1.339157614,LastTimestamp:2026-02-18 00:08:37.824699859 +0000 UTC m=+1.339157614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.662475 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea5bd8dfadb openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.824838363 +0000 UTC m=+1.339296158,LastTimestamp:2026-02-18 00:08:37.824838363 +0000 UTC m=+1.339296158,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.663946 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea5bf27808f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.851676815 +0000 UTC m=+1.366134570,LastTimestamp:2026-02-18 00:08:37.851676815 +0000 UTC m=+1.366134570,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.669634 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea5bf69f1b6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:37.856031158 +0000 UTC m=+1.370488903,LastTimestamp:2026-02-18 00:08:37.856031158 +0000 UTC m=+1.370488903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.671011 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea5e82384b5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.539281589 +0000 UTC m=+2.053739354,LastTimestamp:2026-02-18 00:08:38.539281589 +0000 UTC m=+2.053739354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.673890 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea5e8256c9d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.539406493 +0000 UTC m=+2.053864238,LastTimestamp:2026-02-18 00:08:38.539406493 +0000 UTC m=+2.053864238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.675863 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea5e82583fa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.539412474 +0000 UTC m=+2.053870219,LastTimestamp:2026-02-18 00:08:38.539412474 +0000 UTC m=+2.053870219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.677694 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5e8311b93 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.540172179 +0000 UTC m=+2.054629934,LastTimestamp:2026-02-18 00:08:38.540172179 +0000 UTC m=+2.054629934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.679717 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea5e8334788 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.540314504 +0000 UTC m=+2.054772259,LastTimestamp:2026-02-18 00:08:38.540314504 +0000 UTC m=+2.054772259,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.681895 5121 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea5e9049a5f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.554032735 +0000 UTC m=+2.068490490,LastTimestamp:2026-02-18 00:08:38.554032735 +0000 UTC m=+2.068490490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.684189 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5e91dd46f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.555685999 +0000 UTC m=+2.070143754,LastTimestamp:2026-02-18 00:08:38.555685999 +0000 UTC m=+2.070143754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.686185 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea5e934b2b0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.557184688 +0000 UTC m=+2.071642433,LastTimestamp:2026-02-18 00:08:38.557184688 +0000 UTC m=+2.071642433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.689182 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5e9370eed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.557339373 +0000 UTC m=+2.071797108,LastTimestamp:2026-02-18 00:08:38.557339373 +0000 UTC m=+2.071797108,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.690379 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea5e94ebda0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.558891424 +0000 UTC m=+2.073349159,LastTimestamp:2026-02-18 00:08:38.558891424 +0000 UTC m=+2.073349159,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.693598 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea5e9681aba openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.560553658 +0000 UTC m=+2.075011433,LastTimestamp:2026-02-18 00:08:38.560553658 +0000 UTC m=+2.075011433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.698508 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5fae96439 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.854239289 +0000 UTC m=+2.368697024,LastTimestamp:2026-02-18 00:08:38.854239289 +0000 UTC m=+2.368697024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.702585 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5fbdea839 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.870313017 +0000 UTC m=+2.384770752,LastTimestamp:2026-02-18 00:08:38.870313017 +0000 UTC m=+2.384770752,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.706381 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea5fbf7ada1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:38.871952801 +0000 UTC m=+2.386410566,LastTimestamp:2026-02-18 00:08:38.871952801 +0000 UTC m=+2.386410566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.710786 5121 event.go:359] "Server rejected event (will not 
retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea614d5e5ea openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.289169386 +0000 UTC m=+2.803627131,LastTimestamp:2026-02-18 00:08:39.289169386 +0000 UTC m=+2.803627131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.715560 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea6151d84a9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.293863081 +0000 UTC 
m=+2.808320816,LastTimestamp:2026-02-18 00:08:39.293863081 +0000 UTC m=+2.808320816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.719843 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea6159f54c2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.302370498 +0000 UTC m=+2.816828233,LastTimestamp:2026-02-18 00:08:39.302370498 +0000 UTC m=+2.816828233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.723539 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea615a43d8c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.302692236 +0000 UTC m=+2.817149971,LastTimestamp:2026-02-18 00:08:39.302692236 +0000 UTC m=+2.817149971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.730069 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea6193ca16d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.363010925 +0000 UTC m=+2.877468670,LastTimestamp:2026-02-18 00:08:39.363010925 +0000 UTC m=+2.877468670,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.738385 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea61a17016b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.377322347 +0000 UTC m=+2.891780082,LastTimestamp:2026-02-18 00:08:39.377322347 +0000 UTC m=+2.891780082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.743539 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea61a437cdf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.380237535 +0000 UTC 
m=+2.894695270,LastTimestamp:2026-02-18 00:08:39.380237535 +0000 UTC m=+2.894695270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.749112 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea6271eb332 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.595930418 +0000 UTC m=+3.110388153,LastTimestamp:2026-02-18 00:08:39.595930418 +0000 UTC m=+3.110388153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.753099 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea62767de5c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 
00:08:39.600725596 +0000 UTC m=+3.115183331,LastTimestamp:2026-02-18 00:08:39.600725596 +0000 UTC m=+3.115183331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.757888 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea6276a67a2 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.60089181 +0000 UTC m=+3.115349545,LastTimestamp:2026-02-18 00:08:39.60089181 +0000 UTC m=+3.115349545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.762290 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea6277a6f9c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.601942428 +0000 UTC m=+3.116400163,LastTimestamp:2026-02-18 00:08:39.601942428 +0000 UTC m=+3.116400163,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.767553 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea627f22aaa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.609789098 +0000 UTC m=+3.124246843,LastTimestamp:2026-02-18 00:08:39.609789098 +0000 UTC m=+3.124246843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.772009 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea62805f8b8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.611087032 +0000 UTC m=+3.125544787,LastTimestamp:2026-02-18 00:08:39.611087032 +0000 UTC m=+3.125544787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.778993 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea628d8eee2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.62491261 +0000 UTC m=+3.139370365,LastTimestamp:2026-02-18 00:08:39.62491261 +0000 UTC m=+3.139370365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.783461 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea628edf5c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.626290627 +0000 UTC m=+3.140748362,LastTimestamp:2026-02-18 00:08:39.626290627 +0000 UTC m=+3.140748362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.789149 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18952ea6292f1499 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.630558361 +0000 UTC m=+3.145016106,LastTimestamp:2026-02-18 00:08:39.630558361 +0000 UTC m=+3.145016106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.795074 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea629a8907e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.638519934 +0000 UTC m=+3.152977669,LastTimestamp:2026-02-18 00:08:39.638519934 +0000 UTC m=+3.152977669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.799462 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea629affe4c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.639006796 +0000 UTC m=+3.153464531,LastTimestamp:2026-02-18 
00:08:39.639006796 +0000 UTC m=+3.153464531,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.803674 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea62b0bcdb5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.661800885 +0000 UTC m=+3.176258620,LastTimestamp:2026-02-18 00:08:39.661800885 +0000 UTC m=+3.176258620,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.812494 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea634c52bcc openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created 
container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.824944076 +0000 UTC m=+3.339401811,LastTimestamp:2026-02-18 00:08:39.824944076 +0000 UTC m=+3.339401811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.819113 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea635b326d4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.840540372 +0000 UTC m=+3.354998117,LastTimestamp:2026-02-18 00:08:39.840540372 +0000 UTC m=+3.354998117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.823429 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea635c5cb43 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.841762115 +0000 UTC m=+3.356219850,LastTimestamp:2026-02-18 00:08:39.841762115 +0000 UTC m=+3.356219850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.827882 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea639c6a6aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.908927146 +0000 UTC m=+3.423384881,LastTimestamp:2026-02-18 00:08:39.908927146 +0000 UTC m=+3.423384881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.832541 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea63ad07409 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.926346761 +0000 UTC m=+3.440804496,LastTimestamp:2026-02-18 00:08:39.926346761 +0000 UTC m=+3.440804496,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.839087 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea63af945b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:39.929021873 +0000 UTC m=+3.443479718,LastTimestamp:2026-02-18 00:08:39.929021873 +0000 UTC m=+3.443479718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.843176 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea643d023e3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.077321187 +0000 UTC m=+3.591778932,LastTimestamp:2026-02-18 00:08:40.077321187 +0000 UTC m=+3.591778932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.847195 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18952ea6451e7e0c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.099233292 
+0000 UTC m=+3.613691027,LastTimestamp:2026-02-18 00:08:40.099233292 +0000 UTC m=+3.613691027,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.852381 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea648b441a5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.159379877 +0000 UTC m=+3.673837612,LastTimestamp:2026-02-18 00:08:40.159379877 +0000 UTC m=+3.673837612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.856355 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea649b1a30c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started 
container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.17598542 +0000 UTC m=+3.690443155,LastTimestamp:2026-02-18 00:08:40.17598542 +0000 UTC m=+3.690443155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.860509 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea649c68fca openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.177356746 +0000 UTC m=+3.691814481,LastTimestamp:2026-02-18 00:08:40.177356746 +0000 UTC m=+3.691814481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.862281 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea652f1ae75 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.331177589 +0000 UTC m=+3.845635324,LastTimestamp:2026-02-18 00:08:40.331177589 +0000 UTC m=+3.845635324,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.867025 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea6576cce1c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.406355484 +0000 UTC m=+3.920813219,LastTimestamp:2026-02-18 00:08:40.406355484 +0000 UTC m=+3.920813219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.872082 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea6591ec363 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.434795363 +0000 UTC m=+3.949253098,LastTimestamp:2026-02-18 00:08:40.434795363 +0000 UTC m=+3.949253098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.876347 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea65937d721 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,LastTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 
18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.881432 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6626b145d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.590791773 +0000 UTC m=+4.105249508,LastTimestamp:2026-02-18 00:08:40.590791773 +0000 UTC m=+4.105249508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.886239 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea663a49e60 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.611339872 +0000 UTC m=+4.125797597,LastTimestamp:2026-02-18 00:08:40.611339872 +0000 UTC m=+4.125797597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 
00:09:01.891226 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66ba4c6a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,LastTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.898776 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66c5e8e02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,LastTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.904614 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea68ee99384 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.337279364 +0000 UTC m=+4.851737099,LastTimestamp:2026-02-18 00:08:41.337279364 +0000 UTC m=+4.851737099,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.909519 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea69ec75502 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.603470594 +0000 UTC m=+5.117928329,LastTimestamp:2026-02-18 00:08:41.603470594 +0000 UTC 
m=+5.117928329,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.915560 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6a0348612 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.627403794 +0000 UTC m=+5.141861569,LastTimestamp:2026-02-18 00:08:41.627403794 +0000 UTC m=+5.141861569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.921072 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6a04f4516 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.62915663 +0000 UTC 
m=+5.143614365,LastTimestamp:2026-02-18 00:08:41.62915663 +0000 UTC m=+5.143614365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.925754 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6ad1661ba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.843532218 +0000 UTC m=+5.357989963,LastTimestamp:2026-02-18 00:08:41.843532218 +0000 UTC m=+5.357989963,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.929942 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6ae76e9d6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.866635734 +0000 UTC m=+5.381093469,LastTimestamp:2026-02-18 00:08:41.866635734 +0000 UTC 
m=+5.381093469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.936307 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6ae968877 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:41.868707959 +0000 UTC m=+5.383165694,LastTimestamp:2026-02-18 00:08:41.868707959 +0000 UTC m=+5.383165694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.943004 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6bb00e884 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.077005956 +0000 UTC 
m=+5.591463691,LastTimestamp:2026-02-18 00:08:42.077005956 +0000 UTC m=+5.591463691,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.949110 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6bd3a3a32 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.11431685 +0000 UTC m=+5.628774625,LastTimestamp:2026-02-18 00:08:42.11431685 +0000 UTC m=+5.628774625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.953391 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6bd54fdac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.116070828 +0000 UTC m=+5.630528573,LastTimestamp:2026-02-18 00:08:42.116070828 +0000 UTC m=+5.630528573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.959736 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6cae07439 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.343314489 +0000 UTC m=+5.857772234,LastTimestamp:2026-02-18 00:08:42.343314489 +0000 UTC m=+5.857772234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.965152 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6cbc878ec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 
00:08:42.358520044 +0000 UTC m=+5.872977779,LastTimestamp:2026-02-18 00:08:42.358520044 +0000 UTC m=+5.872977779,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.966219 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6cbf025f4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.361120244 +0000 UTC m=+5.875577979,LastTimestamp:2026-02-18 00:08:42.361120244 +0000 UTC m=+5.875577979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.973885 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6d5d3e6eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: 
etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.527041259 +0000 UTC m=+6.041498994,LastTimestamp:2026-02-18 00:08:42.527041259 +0000 UTC m=+6.041498994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.980319 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18952ea6d6cd4ed5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:42.543386325 +0000 UTC m=+6.057844060,LastTimestamp:2026-02-18 00:08:42.543386325 +0000 UTC m=+6.057844060,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.989300 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 18 00:09:01 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-controller-manager-crc.18952ea8cd1fc35e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded
Feb 18 00:09:01 crc kubenswrapper[5121]: body: 
Feb 18 00:09:01 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:50.970952542 +0000 UTC m=+14.485410317,LastTimestamp:2026-02-18 00:08:50.970952542 +0000 UTC m=+14.485410317,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:01 crc kubenswrapper[5121]: >
Feb 18 00:09:01 crc kubenswrapper[5121]: E0218 00:09:01.994947 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18952ea8cd216dd3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:50.971061715 +0000 UTC m=+14.485519480,LastTimestamp:2026-02-18 00:08:50.971061715 +0000 UTC m=+14.485519480,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.001032 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2c1903 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 18 00:09:02 crc kubenswrapper[5121]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 00:09:02 crc kubenswrapper[5121]: 
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,LastTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.006327 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2d3083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,LastTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.016698 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea91a2c1903\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2c1903 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 18 00:09:02 crc kubenswrapper[5121]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 18 00:09:02 crc kubenswrapper[5121]: 
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263606531 +0000 UTC m=+15.778064266,LastTimestamp:2026-02-18 00:08:52.270513655 +0000 UTC m=+15.784971390,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.025047 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea91a2d3083\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea91a2d3083 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:52.263678083 +0000 UTC m=+15.778135838,LastTimestamp:2026-02-18 00:08:52.270592037 +0000 UTC m=+15.785049762,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.032993 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46e9f0d3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": EOF
Feb 18 00:09:02 crc kubenswrapper[5121]: body: 
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309212883 +0000 UTC m=+20.823670628,LastTimestamp:2026-02-18 00:08:57.309212883 +0000 UTC m=+20.823670628,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.039403 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46eb11e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309286885 +0000 UTC m=+20.823744630,LastTimestamp:2026-02-18 00:08:57.309286885 +0000 UTC m=+20.823744630,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.046280 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46ec7839 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": EOF
Feb 18 00:09:02 crc kubenswrapper[5121]: body: 
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309378617 +0000 UTC m=+20.823836392,LastTimestamp:2026-02-18 00:08:57.309378617 +0000 UTC m=+20.823836392,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.053497 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46eea6fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": EOF,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309521661 +0000 UTC m=+20.823979436,LastTimestamp:2026-02-18 00:08:57.309521661 +0000 UTC m=+20.823979436,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.060741 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 18 00:09:02 crc kubenswrapper[5121]: &Event{ObjectMeta:{kube-apiserver-crc.18952eaa46f47551 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Feb 18 00:09:02 crc kubenswrapper[5121]: body: 
Feb 18 00:09:02 crc kubenswrapper[5121]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309902161 +0000 UTC m=+20.824359936,LastTimestamp:2026-02-18 00:08:57.309902161 +0000 UTC m=+20.824359936,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 18 00:09:02 crc kubenswrapper[5121]: >
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.065618 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaa46f5e433 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:57.309996083 +0000 UTC m=+20.824453858,LastTimestamp:2026-02-18 00:08:57.309996083 +0000 UTC m=+20.824453858,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.070283 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea65937d721\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea65937d721 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,LastTimestamp:2026-02-18 00:08:57.417552202 +0000 UTC m=+20.932009977,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.071504 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66ba4c6a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66ba4c6a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,LastTimestamp:2026-02-18 00:08:57.662373543 +0000 UTC m=+21.176831288,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.076790 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66c5e8e02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66c5e8e02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,LastTimestamp:2026-02-18 00:08:57.678954235 +0000 UTC m=+21.193411980,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: E0218 00:09:02.084155 5121 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:02 crc kubenswrapper[5121]: I0218 00:09:02.150038 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.150223 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.679053 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680627 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680780 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:03 crc kubenswrapper[5121]: I0218 00:09:03.680817 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:09:03 crc kubenswrapper[5121]: E0218 00:09:03.698724 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 18 00:09:04 crc kubenswrapper[5121]: I0218 00:09:04.150459 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.151544 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.562493 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.562918 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564106 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564394 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.564644 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.565635 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:09:05 crc kubenswrapper[5121]: I0218 00:09:05.566487 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51"
Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.567157 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:09:05 crc kubenswrapper[5121]: E0218 00:09:05.575966 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:05.567078673 +0000 UTC m=+29.081536448,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:06 crc kubenswrapper[5121]: I0218 00:09:06.152321 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:06 crc kubenswrapper[5121]: E0218 00:09:06.700129 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 18 00:09:06 crc kubenswrapper[5121]: E0218 00:09:06.838367 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 18 00:09:07 crc kubenswrapper[5121]: I0218 00:09:07.149641 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:07 crc kubenswrapper[5121]: E0218 00:09:07.323519 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 00:09:07 crc kubenswrapper[5121]: E0218 00:09:07.640485 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.151172 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.182147 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.420991 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.422024 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.422964 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.423545 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:09:08 crc kubenswrapper[5121]: I0218 00:09:08.423988 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51"
Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.424268 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:09:08 crc kubenswrapper[5121]: E0218 00:09:08.430614 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:08.424226081 +0000 UTC m=+31.938683826,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:09 crc kubenswrapper[5121]: E0218 00:09:09.114741 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 18 00:09:09 crc kubenswrapper[5121]: I0218 00:09:09.150421 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.149968 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.699856 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701623 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:10 crc kubenswrapper[5121]: I0218 00:09:10.701752 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:09:10 crc kubenswrapper[5121]: E0218 00:09:10.718700 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 18 00:09:11 crc kubenswrapper[5121]: I0218 00:09:11.151977 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:12 crc kubenswrapper[5121]: I0218 00:09:12.151816 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:13 crc kubenswrapper[5121]: I0218 00:09:13.150139 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:13 crc kubenswrapper[5121]: E0218 00:09:13.845257 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 18 00:09:14 crc kubenswrapper[5121]: I0218 00:09:14.150790 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:15 crc kubenswrapper[5121]: I0218 00:09:15.153021 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:16 crc kubenswrapper[5121]: I0218 00:09:16.150914 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.149516 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:17 crc kubenswrapper[5121]: E0218 00:09:17.324776 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.719338 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720746 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720773 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:17 crc kubenswrapper[5121]: I0218 00:09:17.720819 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 18 00:09:17 crc kubenswrapper[5121]: E0218 00:09:17.734431 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 18 00:09:18 crc kubenswrapper[5121]: I0218 00:09:18.148201 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:19 crc kubenswrapper[5121]: I0218 00:09:19.151186 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:20 crc kubenswrapper[5121]: I0218 00:09:20.152064 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:20 crc kubenswrapper[5121]: E0218 00:09:20.852395 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 18 00:09:21 crc kubenswrapper[5121]: I0218 00:09:21.151508 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.148751 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.269990 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271271 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.271386 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.272175 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:09:22 crc kubenswrapper[5121]: I0218 00:09:22.272711 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51"
Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.278348 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea65937d721\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea65937d721 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.436438817 +0000 UTC m=+3.950896552,LastTimestamp:2026-02-18 00:09:22.2747512 +0000 UTC m=+45.789208945,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.523175 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66ba4c6a8\" is forbidden: User \"system:anonymous\" cannot patch 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66ba4c6a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.745567912 +0000 UTC m=+4.260025647,LastTimestamp:2026-02-18 00:09:22.517527877 +0000 UTC m=+46.031985612,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:22 crc kubenswrapper[5121]: E0218 00:09:22.533413 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952ea66c5e8e02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952ea66c5e8e02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:40.757743106 +0000 UTC m=+4.272200841,LastTimestamp:2026-02-18 00:09:22.527687607 +0000 UTC m=+46.042145352,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 
00:09:23.150986 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.469096 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.505131 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.506107 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508847 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" exitCode=255 Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b"} Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.508974 5121 scope.go:117] "RemoveContainer" containerID="eb14850c7284e6e23700749b71ed3d1708fea272e47217ccc0c2cb0861becd51" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.509397 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512222 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512269 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.512708 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:23 crc kubenswrapper[5121]: I0218 00:09:23.512967 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.513198 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:23 crc kubenswrapper[5121]: E0218 00:09:23.519102 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:23.513149033 +0000 UTC m=+47.027606768,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.148966 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.515500 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.734793 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:24 crc kubenswrapper[5121]: I0218 00:09:24.736162 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:24 crc 
kubenswrapper[5121]: I0218 00:09:24.736189 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:24 crc kubenswrapper[5121]: E0218 00:09:24.746119 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.150676 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.563028 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.563338 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.565333 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.565388 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.565404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:25 crc kubenswrapper[5121]: E0218 00:09:25.565834 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:25 crc kubenswrapper[5121]: I0218 00:09:25.566253 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:25 
crc kubenswrapper[5121]: E0218 00:09:25.566607 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:25 crc kubenswrapper[5121]: E0218 00:09:25.572329 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:25.566567863 +0000 UTC m=+49.081025598,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:26 crc kubenswrapper[5121]: I0218 00:09:26.151555 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:27 crc kubenswrapper[5121]: I0218 
00:09:27.150963 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:27 crc kubenswrapper[5121]: E0218 00:09:27.325803 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:27 crc kubenswrapper[5121]: E0218 00:09:27.860694 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.151494 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.398332 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.420758 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.421063 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422218 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422311 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.422338 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.423172 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:28 crc kubenswrapper[5121]: I0218 00:09:28.424389 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.424807 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.433444 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints 
in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:28.424743389 +0000 UTC m=+51.939201164,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:28 crc kubenswrapper[5121]: E0218 00:09:28.918567 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 18 00:09:29 crc kubenswrapper[5121]: I0218 00:09:29.149204 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:29 crc kubenswrapper[5121]: E0218 00:09:29.850581 5121 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 18 00:09:30 crc kubenswrapper[5121]: I0218 00:09:30.151459 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.151683 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.747004 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748568 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748586 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:31 crc kubenswrapper[5121]: I0218 00:09:31.748621 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:31 crc kubenswrapper[5121]: E0218 00:09:31.762360 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.152008 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.355290 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.355530 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356611 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:32 crc kubenswrapper[5121]: I0218 00:09:32.356751 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:32 crc kubenswrapper[5121]: E0218 00:09:32.357891 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:33 crc kubenswrapper[5121]: I0218 00:09:33.149366 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:34 crc kubenswrapper[5121]: I0218 00:09:34.151268 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:34 crc kubenswrapper[5121]: E0218 00:09:34.867569 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:35 crc kubenswrapper[5121]: I0218 00:09:35.148507 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:36 crc kubenswrapper[5121]: I0218 00:09:36.151289 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:37 crc kubenswrapper[5121]: I0218 00:09:37.151172 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:37 crc kubenswrapper[5121]: E0218 00:09:37.326303 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.151344 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.763318 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765012 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765092 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:38 crc kubenswrapper[5121]: I0218 00:09:38.765148 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:38 crc kubenswrapper[5121]: E0218 00:09:38.776861 5121 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" 
cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 18 00:09:39 crc kubenswrapper[5121]: I0218 00:09:39.149206 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:40 crc kubenswrapper[5121]: I0218 00:09:40.151216 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.148419 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.270145 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.271574 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.272062 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:41 crc kubenswrapper[5121]: I0218 00:09:41.272420 5121 scope.go:117] "RemoveContainer" 
containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.272735 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.282705 5121 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18952eaac57e8e35\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18952eaac57e8e35 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:08:59.432881717 +0000 UTC m=+22.947339462,LastTimestamp:2026-02-18 00:09:41.272698642 +0000 UTC m=+64.787156387,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:09:41 crc kubenswrapper[5121]: E0218 00:09:41.876620 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group 
\"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 18 00:09:42 crc kubenswrapper[5121]: I0218 00:09:42.149506 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:43 crc kubenswrapper[5121]: I0218 00:09:43.151324 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.148624 5121 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.297448 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xm5pj" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.308015 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-xm5pj" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.372844 5121 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 18 00:09:44 crc kubenswrapper[5121]: I0218 00:09:44.970971 5121 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.270140 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.271953 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.272024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.272037 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.272531 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.310098 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-20 00:04:44 +0000 UTC" deadline="2026-03-13 19:35:18.125232573 +0000 UTC" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.310174 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="571h25m32.81506488s" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.777593 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779030 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779136 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.779339 5121 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.791101 5121 
kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.791550 5121 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.791591 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796521 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796596 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796679 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.796708 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.823437 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837567 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837777 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.837808 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.850560 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861075 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861133 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861149 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861172 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.861185 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.871207 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879877 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.879983 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.880007 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:45 crc kubenswrapper[5121]: I0218 00:09:45.880022 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:45Z","lastTransitionTime":"2026-02-18T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896296 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896439 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.896473 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:45 crc kubenswrapper[5121]: E0218 00:09:45.997492 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.098691 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.199264 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.300454 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.400937 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.501225 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 
00:09:46.601467 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.701574 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.802727 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:46 crc kubenswrapper[5121]: E0218 00:09:46.903664 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.004138 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.105034 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.205827 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.306283 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.326686 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.406766 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.507024 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.607431 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.707763 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.808281 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:47 crc kubenswrapper[5121]: E0218 00:09:47.908979 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.010228 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.111108 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.212167 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.313139 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.413468 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.514616 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.615378 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.715934 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.816227 5121 kubelet_node_status.go:515] "Error getting 
the current node from lister" err="node \"crc\" not found" Feb 18 00:09:48 crc kubenswrapper[5121]: E0218 00:09:48.916596 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.017723 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.118677 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.219085 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.319499 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.419989 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.521207 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.621862 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.722449 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.823378 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:49 crc kubenswrapper[5121]: E0218 00:09:49.923883 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.024210 5121 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.125063 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.225517 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.326495 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.427498 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.527699 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.628635 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.729167 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.830101 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:50 crc kubenswrapper[5121]: E0218 00:09:50.931087 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.032271 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.133431 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc 
kubenswrapper[5121]: E0218 00:09:51.234449 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.334775 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.435931 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.536229 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.637375 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.738420 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.839556 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:51 crc kubenswrapper[5121]: E0218 00:09:51.940354 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.041099 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.141361 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.242166 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.342702 5121 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.443726 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.544940 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.645686 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.746792 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.847433 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:52 crc kubenswrapper[5121]: E0218 00:09:52.947788 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.048349 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.149339 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.250488 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.270587 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271614 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271692 5121 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.271708 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.272232 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.272529 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.351252 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.452369 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.552730 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.605082 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.606873 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"} Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.607116 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608025 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608193 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:53 crc kubenswrapper[5121]: I0218 00:09:53.608267 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.609834 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.653348 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.754338 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.854976 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:53 crc kubenswrapper[5121]: E0218 00:09:53.955409 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.055758 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.156277 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.257038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.357181 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 
00:09:54.457450 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.558025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.659081 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.760200 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.861253 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:54 crc kubenswrapper[5121]: E0218 00:09:54.962421 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.063146 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.163466 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.264317 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.364808 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.465205 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.565999 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 
00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.615511 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.616323 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618162 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" exitCode=255 Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618249 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"} Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618324 5121 scope.go:117] "RemoveContainer" containerID="c03fb1e653923bf0fbd22dfd3f715eb9f8e90d5a11c25cf5b90171cd19989a6b" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.618615 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619468 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.619767 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.622543 5121 kubelet.go:3336] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:09:55 crc kubenswrapper[5121]: I0218 00:09:55.623107 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.623601 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.666895 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.767113 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.867391 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:55 crc kubenswrapper[5121]: E0218 00:09:55.968585 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.069330 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.162342 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc 
kubenswrapper[5121]: I0218 00:09:56.171880 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171904 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.171954 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.190186 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204368 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204449 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204469 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204498 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.204518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.222789 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234730 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234865 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.234884 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.249885 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260532 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260592 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260605 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260622 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.260636 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:09:56Z","lastTransitionTime":"2026-02-18T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273618 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273827 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.273857 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.374835 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.475814 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.576008 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: I0218 00:09:56.622848 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.677123 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.778047 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.879806 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:56 crc kubenswrapper[5121]: E0218 00:09:56.982056 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not 
found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.082890 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.183062 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.284057 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.327793 5121 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.384528 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.485121 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.586355 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.687282 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.787918 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.888535 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:57 crc kubenswrapper[5121]: E0218 00:09:57.989379 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.090227 5121 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.190727 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.290888 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.391025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.492181 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.592720 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.693848 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.794078 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.894439 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:58 crc kubenswrapper[5121]: E0218 00:09:58.994754 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.095765 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.196492 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 
00:09:59.296747 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.397234 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.497428 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.598048 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.698851 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.799547 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:09:59 crc kubenswrapper[5121]: E0218 00:09:59.900025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.000149 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.100302 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.201295 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.301947 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.403128 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 
00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.503324 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.604548 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.705216 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.806087 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:00 crc kubenswrapper[5121]: E0218 00:10:00.907038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.008192 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.109354 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.210524 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.311267 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.411931 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.512641 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.613266 5121 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.714349 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.816038 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:01 crc kubenswrapper[5121]: E0218 00:10:01.916706 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.017885 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.118047 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.219285 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.270217 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:02 crc kubenswrapper[5121]: I0218 00:10:02.271718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.272223 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 
00:10:02.320333 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.420941 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.521212 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.621316 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.721900 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.822985 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:02 crc kubenswrapper[5121]: E0218 00:10:02.924211 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.024858 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.125739 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.225919 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.327066 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.427638 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 18 
00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.528473 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.607529 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.607958 5121 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609514 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609573 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.609597 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.610209 5121 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.610532 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.610810 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.628731 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: E0218 00:10:03.730025 5121 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.801669 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.832895 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833220 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833350 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833480 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.833614 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:03Z","lastTransitionTime":"2026-02-18T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.874157 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.889484 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936870 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936882 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.936913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:03Z","lastTransitionTime":"2026-02-18T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:03 crc kubenswrapper[5121]: I0218 00:10:03.990006 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.039857 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.039978 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040009 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040042 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.040067 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.089740 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143736 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143809 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143864 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.143892 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.169347 5121 apiserver.go:52] "Watching apiserver" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.180492 5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.183048 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tqxjt","openshift-multus/multus-additional-cni-plugins-n2m5r","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-image-registry/node-ca-vsc9f","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-machine-config-operator/machine-config-daemon-ss65g","openshift-network-operator/iptables-alerter-5jnd7","openshift-multus/multus-9dxsb","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-7tprw","openshift-etcd/etcd-crc","openshift-multus/network-metrics-daemon-mlvtl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"] Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.185030 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.186505 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.186699 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.187451 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.187749 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.187815 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.188860 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.189212 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.190542 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.190637 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.192799 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.192932 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.194022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.195453 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.197151 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.198309 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.198831 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.199301 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.199533 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.200377 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208389 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208729 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.208915 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.210996 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.212358 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.215519 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218392 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218467 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218393 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.218868 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.219128 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.220066 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.220302 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.223884 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.223967 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.224381 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.224855 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225694 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225718 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.225880 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.226225 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 
18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.227833 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.228057 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.228402 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.230843 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.232415 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.233769 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.234517 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.236679 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.236951 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237021 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237521 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.237846 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240143 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240426 5121 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.240982 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.241379 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.242250 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.242713 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.243341 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.244043 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.245100 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.245858 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246855 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246938 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.246966 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.247103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.247184 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.262153 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.276216 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.277270 5121 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.288908 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 18 00:10:04 crc kubenswrapper[5121]: 
I0218 00:10:04.291174 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.305134 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309698 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309771 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod 
\"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.309825 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310353 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310522 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310723 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311008 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311087 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311204 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311282 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311427 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311543 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311779 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311885 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312233 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312349 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312548 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312663 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312756 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312836 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312980 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313067 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313460 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313550 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313680 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313761 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313908 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313990 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314072 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314146 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" 
(UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314214 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314286 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314363 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314917 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315061 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315152 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315760 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315869 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316042 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316125 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316197 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316269 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316347 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316497 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316572 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: 
\"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316664 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316754 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316847 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316990 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317138 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317210 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317282 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317357 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") 
" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317508 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317579 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317675 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317784 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317895 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318007 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318082 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318155 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318247 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.310797 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311387 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.311996 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312375 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.312778 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313147 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313328 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313362 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313394 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313710 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313909 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.313894 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314095 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314638 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314812 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314968 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.314788 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315052 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315173 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315201 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315613 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315721 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315757 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.315894 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316440 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316486 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316937 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.316548 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317010 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317162 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317292 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317342 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317807 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.317971 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.318013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319039 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319110 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319552 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319575 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.319968 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320085 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320360 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320420 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320475 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320529 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320581 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320866 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320932 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.320991 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321045 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321108 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321200 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321275 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321258 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321418 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321577 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321707 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321789 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321848 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321905 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.321959 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322016 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322074 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322129 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.322191 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322246 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322304 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322382 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322448 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: 
\"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322555 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322606 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322689 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: 
\"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322846 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322906 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.322950 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323026 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323078 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323129 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323309 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323367 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323425 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323462 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323498 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.323538 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323574 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323613 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323669 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323703 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323637 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" 
(OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323738 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323862 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323901 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323837 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323931 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323905 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.323962 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.324128 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.324213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325189 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325258 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325413 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325527 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325533 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325567 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.325571 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326104 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326139 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326140 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326267 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326317 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326327 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326381 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326486 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326527 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326495 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326564 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326598 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326627 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326686 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326722 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326753 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326786 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326810 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326846 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" 
(UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326874 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326904 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326928 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326953 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326981 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327007 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327180 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327211 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327238 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327275 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327302 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327328 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327370 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327395 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328997 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326559 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335153 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326571 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326641 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326794 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327103 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327447 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.327465 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.827407027 +0000 UTC m=+88.341864762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335397 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.327694 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328097 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336384 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328336 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328344 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328553 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328563 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328724 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336487 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.328878 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329113 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329245 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329455 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329684 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336584 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.329988 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330016 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330632 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330635 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330122 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330812 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330846 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331440 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331562 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.331754 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332169 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332411 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332457 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332605 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332713 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332997 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.332922 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333036 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333034 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.333512 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334367 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334424 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334784 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334806 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.334887 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335103 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335805 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.326148 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335826 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.335971 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336254 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336479 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.330064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337683 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337746 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.336055 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337783 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337844 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337886 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337918 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337942 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337970 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.337999 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338026 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338066 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338580 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338622 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338677 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338746 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.338758 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339221 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339244 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339257 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339419 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339482 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339348 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339541 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339700 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339778 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339789 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339821 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339947 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.339979 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340034 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340068 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340092 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340141 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340167 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340163 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340175 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340194 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340233 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340266 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340281 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340320 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340364 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340400 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340431 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340466 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340551 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340582 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340622 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340676 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340708 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340745 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340781 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.340773 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341179 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341245 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341295 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341339 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341388 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341445 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341481 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341520 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341563 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341610 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341643 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341779 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341837 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341876 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341913 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341956 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341996 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342022 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342057 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342083 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342105 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342137 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342165 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342189 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342215 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342241 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342264 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") "
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342414 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342497 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342524 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume
started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342555 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342582 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342606 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342629 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.342690 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342721 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342748 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342770 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342826 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342850 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341187 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341987 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.341991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342038 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342473 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342688 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342716 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.343947 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.344183 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.344209 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.342872 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.345561 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346152 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346198 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346558 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.346895 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347138 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5wk\" (UniqueName: \"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347221 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347227 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347186 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347299 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347387 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347601 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347772 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.347902 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.348044 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.848015957 +0000 UTC m=+88.362473692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348051 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.347808 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348080 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348130 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348176 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348210 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348239 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348333 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod 
\"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348383 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348418 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348539 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348539 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348634 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.348907 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349308 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349450 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349533 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349617 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349853 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.349944 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350073 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350144 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350217 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350248 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350292 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350419 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350460 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350481 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350709 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350743 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350837 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.350951 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.350878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.351102 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.851074438 +0000 UTC m=+88.365532173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351167 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351262 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351386 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351429 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351522 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351566 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351619 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351758 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351739 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351790 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351517 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.351889 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352613 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352669 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352638 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352698 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352741 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352479 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352795 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352798 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352858 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352113 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352902 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352521 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352966 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353190 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353211 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353227 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353242 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353260 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353275 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353292 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353309 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353324 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353340 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353355 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353374 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353386 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353402 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353416 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353434 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353450 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353466 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353483 5121 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353501 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353518 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353530 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353546 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353558 5121 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353570 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353582 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353598 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353614 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353630 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353643 5121 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353705 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353719 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353733 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353752 5121 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353768 5121 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353782 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353796 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353814 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353829 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353843 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353855 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353872 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353886 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353820 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353902 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353918 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353430 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353936 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.352841 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354012 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354037 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354060 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.353967 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354140 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354215 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354237 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354291 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355064 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.354348 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355114 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355149 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355186 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355213 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355255 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355273 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355296 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355311 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355324 5121 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.355350 5121 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355387 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355406 5121 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355422 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355443 5121 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355458 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355471 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355486 5121 
reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355503 5121 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355486 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355516 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.356997 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357031 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357046 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357142 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357593 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358322 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.357361 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.358751 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.355203 5121 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.362474 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.364306 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.364837 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.365438 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.366194 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.368063 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371389 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371425 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371440 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371451 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371465 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371479 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371494 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371511 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371522 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371534 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371545 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371557 5121 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371567 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: 
\"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371578 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371589 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371601 5121 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371612 5121 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371622 5121 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371632 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371643 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371668 5121 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371678 5121 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371688 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371697 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371707 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371717 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371727 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.371736 5121 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371745 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371754 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371765 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371775 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371785 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371797 5121 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.371808 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371818 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371830 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371839 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371848 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371857 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371866 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371877 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: 
\"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371886 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371895 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371904 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371915 5121 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371923 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371934 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371946 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371985 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.371997 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372007 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372017 5121 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372026 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372037 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372047 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.372057 5121 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372067 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372076 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372085 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372094 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372104 5121 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372114 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372124 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372133 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372143 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372152 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372163 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372173 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372182 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372190 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node 
\"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372200 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372209 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372219 5121 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372227 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372235 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372244 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372252 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372260 5121 reconciler_common.go:299] "Volume detached for 
volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372270 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372279 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372289 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372299 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372309 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372318 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372326 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372337 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372345 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372354 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372365 5121 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372377 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372389 5121 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372399 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372407 5121 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372415 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372423 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372431 5121 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372441 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372451 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372461 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372472 5121 reconciler_common.go:299] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372481 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.372490 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373686 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373707 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373719 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.373793 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.873772944 +0000 UTC m=+88.388230679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375166 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375281 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375377 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.375571 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.87553448 +0000 UTC m=+88.389992405 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.375810 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.376202 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377162 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377630 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.377830 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378231 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378264 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378337 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378372 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378406 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.378854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.379201 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.379515 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.380356 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.380381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.381568 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.382029 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.382701 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.383315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.383526 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384374 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384686 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384722 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384700 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.384979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385189 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385251 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385470 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.385848 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.389736 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.394425 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.395742 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.400912 5121 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.405229 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.408839 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.409978 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":
0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.419259 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.430991 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.444097 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.456104 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457665 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457727 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457744 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.457757 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.472143 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473153 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473200 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473220 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473241 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473260 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473277 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473296 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473334 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473349 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.473367 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.473455 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:04.973427492 +0000 UTC m=+88.487885217 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473540 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-hostroot\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473589 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9afb2de0-1fd9-4548-b02d-ba81525f51c8-host\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473372 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lx5wk\" (UniqueName: \"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473678 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473708 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-netns\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473715 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473770 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473791 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473835 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473879 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473886 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473916 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473945 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473969 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.473990 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474008 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 
crc kubenswrapper[5121]: I0218 00:10:04.474132 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-tmp-dir\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474215 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474238 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-system-cni-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474289 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474313 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474355 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-cnibin\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474481 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474531 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.474915 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ce10664c-304a-460f-819a-bf71f3517fb3-rootfs\") pod 
\"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475485 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-daemon-config\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475469 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475606 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475681 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475700 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9afb2de0-1fd9-4548-b02d-ba81525f51c8-serviceca\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475760 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475797 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod 
\"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475842 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475923 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475941 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"ovnkube-node-7tprw\" (UID: 
\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.475966 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476025 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-os-release\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476064 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476111 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-os-release\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476110 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-socket-dir-parent\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 
00:10:04.476142 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476194 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-bin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476412 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476438 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476457 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476487 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476558 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476575 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476597 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476617 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476637 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476695 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-cni-multus\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-var-lib-kubelet\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476727 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-system-cni-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476897 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5bc15fae-a0c0-4032-b673-383e603fe393-tuning-conf-dir\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476904 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.476976 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477336 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477666 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477720 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477737 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477811 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477762 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-multus-certs\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477840 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod 
\"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477839 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477873 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477905 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-etc-kubernetes\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " 
pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477987 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.477986 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-hosts-file\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478066 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478088 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478124 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478197 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478278 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478291 5121 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478303 5121 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478314 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478324 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478335 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478345 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478349 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ce10664c-304a-460f-819a-bf71f3517fb3-mcd-auth-proxy-config\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478356 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478385 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478390 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478416 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478425 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478435 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478445 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478454 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478463 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478473 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478483 5121 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478493 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478504 5121 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478513 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478522 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478531 5121 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478540 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478550 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478558 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478567 5121 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478579 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478588 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478599 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478610 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478619 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478629 5121 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478638 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478667 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478678 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478688 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478699 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478710 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478721 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478723 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-host-run-k8s-cni-cncf-io\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478733 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478794 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478817 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478829 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478837 5121 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478864 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478879 5121 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478890 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478901 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478911 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478925 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478938 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.478983 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-multus-conf-dir\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479001 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cnibin\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479011 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.479622 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.480042 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5bc15fae-a0c0-4032-b673-383e603fe393-cni-binary-copy\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.480470 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.481052 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-cni-binary-copy\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.481877 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ce10664c-304a-460f-819a-bf71f3517fb3-proxy-tls\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.482834 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.485188 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.489415 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.489914 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.490878 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6z5xr\" (UniqueName: \"kubernetes.io/projected/ce10664c-304a-460f-819a-bf71f3517fb3-kube-api-access-6z5xr\") pod \"machine-config-daemon-ss65g\" (UID: \"ce10664c-304a-460f-819a-bf71f3517fb3\") " pod="openshift-machine-config-operator/machine-config-daemon-ss65g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.493436 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"ovnkube-control-plane-57b78d8988-rfj5g\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.493937 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6psrx\" (UniqueName: \"kubernetes.io/projected/51dcc4ed-63a2-4a92-936e-8ef22eca20d6-kube-api-access-6psrx\") pod \"multus-9dxsb\" (UID: \"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\") " pod="openshift-multus/multus-9dxsb"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.497335 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx5wk\" (UniqueName: \"kubernetes.io/projected/9afb2de0-1fd9-4548-b02d-ba81525f51c8-kube-api-access-lx5wk\") pod \"node-ca-vsc9f\" (UID: \"9afb2de0-1fd9-4548-b02d-ba81525f51c8\") " pod="openshift-image-registry/node-ca-vsc9f"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.498802 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plr9k\" (UniqueName: \"kubernetes.io/projected/5bc15fae-a0c0-4032-b673-383e603fe393-kube-api-access-plr9k\") pod \"multus-additional-cni-plugins-n2m5r\" (UID: \"5bc15fae-a0c0-4032-b673-383e603fe393\") " pod="openshift-multus/multus-additional-cni-plugins-n2m5r"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.498982 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.499946 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"ovnkube-node-7tprw\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.500919 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdmp\" (UniqueName: \"kubernetes.io/projected/5b49811f-e44a-43e9-80e6-15fcc9ed145f-kube-api-access-swdmp\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.501532 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8wqj\" (UniqueName: \"kubernetes.io/projected/b47fedd5-33a0-43c1-9e5d-c31c88d07fb8-kube-api-access-q8wqj\") pod \"node-resolver-tqxjt\" (UID: \"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\") " pod="openshift-dns/node-resolver-tqxjt"
Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.512247 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b
3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00
:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\
":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.514336 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.517888 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: source /etc/kubernetes/apiserver-url.env Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 18 00:10:04 crc kubenswrapper[5121]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.519086 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.526131 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.527512 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801 WatchSource:0}: Error finding container 0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801: Status 404 returned error can't find the container with id 0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801 Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.529331 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.535062 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 18 00:10:04 crc kubenswrapper[5121]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 18 00:10:04 crc kubenswrapper[5121]: ho_enable="--enable-hybrid-overlay" Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 18 00:10:04 crc kubenswrapper[5121]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 18 00:10:04 crc kubenswrapper[5121]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-host=127.0.0.1 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-port=9743 \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ho_enable} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-approver \ Feb 18 00:10:04 crc kubenswrapper[5121]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --wait-for-kubernetes-api=200s \ Feb 18 00:10:04 crc kubenswrapper[5121]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.536121 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-vsc9f" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.540253 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-webhook \ Feb 18 00:10:04 crc kubenswrapper[5121]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.540860 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156 WatchSource:0}: Error finding container e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156: Status 404 returned error can't find the container with id 
e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.541474 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.542763 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.543543 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.546795 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},St
artupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.550935 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9dxsb" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.552049 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.558965 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tqxjt" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.559397 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561661 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561922 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.561982 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.562058 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: while [ true ]; Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: for f in $(ls /tmp/serviceca); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $f Feb 18 00:10:04 crc kubenswrapper[5121]: ca_file_path="/tmp/serviceca/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ -e "${reg_dir_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: mkdir $reg_dir_path Feb 18 00:10:04 crc kubenswrapper[5121]: cp $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: for d in $(ls /etc/docker/certs.d); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $d 
Feb 18 00:10:04 crc kubenswrapper[5121]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_conf_path="/tmp/serviceca/${dp}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ ! -e "${reg_conf_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: rm -rf /etc/docker/certs.d/$d Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait ${!} Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsc9f_openshift-image-registry(9afb2de0-1fd9-4548-b02d-ba81525f51c8): CreateContainerConfigError: 
services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.563189 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsc9f" podUID="9afb2de0-1fd9-4548-b02d-ba81525f51c8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.565865 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.571413 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce10664c_304a_460f_819a_bf71f3517fb3.slice/crio-176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b WatchSource:0}: Error finding container 176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b: Status 404 returned error can't find the container with id 176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.573226 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51dcc4ed_63a2_4a92_936e_8ef22eca20d6.slice/crio-6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33 WatchSource:0}: Error finding container 6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33: Status 404 returned error can't find the container with id 6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33 Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.575151 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.578315 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.578802 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 18 00:10:04 crc kubenswrapper[5121]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 18 00:10:04 crc kubenswrapper[5121]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6psrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9dxsb_openshift-multus(51dcc4ed-63a2-4a92-936e-8ef22eca20d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.579216 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb 
sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitiali
zing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.580095 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9dxsb" podUID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.582897 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.588213 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.589402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.590491 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.592431 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb47fedd5_33a0_43c1_9e5d_c31c88d07fb8.slice/crio-c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601 WatchSource:0}: Error finding container c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601: Status 404 returned error can't find the container with id c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.596467 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -uo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: HOSTS_FILE="/etc/hosts" Feb 18 00:10:04 crc kubenswrapper[5121]: TEMP_FILE="/tmp/hosts.tmp" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Make a temporary file with the old hosts file's attributes. Feb 18 00:10:04 crc kubenswrapper[5121]: if ! 
cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Failed to preserve hosts file. Exiting." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: while true; do Feb 18 00:10:04 crc kubenswrapper[5121]: declare -A svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${services[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: # Fetch service IP from cluster dns if present. We make several tries Feb 18 00:10:04 crc kubenswrapper[5121]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 18 00:10:04 crc kubenswrapper[5121]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 18 00:10:04 crc kubenswrapper[5121]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 18 00:10:04 crc kubenswrapper[5121]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 18 00:10:04 crc kubenswrapper[5121]: for i in ${!cmds[*]} Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: ips=($(eval "${cmds[i]}")) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: svc_ips["${svc}"]="${ips[@]}" Feb 18 00:10:04 crc kubenswrapper[5121]: break Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Update /etc/hosts only if we get valid service IPs Feb 18 00:10:04 crc kubenswrapper[5121]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 18 00:10:04 crc kubenswrapper[5121]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 18 00:10:04 crc kubenswrapper[5121]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Append resolver entries for services Feb 18 00:10:04 crc kubenswrapper[5121]: rc=0 Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${!svc_ips[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: for ip in ${svc_ips[${svc}]}; do Feb 18 00:10:04 crc kubenswrapper[5121]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ $rc -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 18 00:10:04 crc kubenswrapper[5121]: # Replace /etc/hosts with our modified version if needed Feb 18 00:10:04 crc kubenswrapper[5121]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 18 00:10:04 crc kubenswrapper[5121]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: unset svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8wqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-tqxjt_openshift-dns(b47fedd5-33a0-43c1-9e5d-c31c88d07fb8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.596969 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ec6f87b_86e0_4893_9709_9dc7381bc95a.slice/crio-8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047 WatchSource:0}: Error finding container 8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047: Status 404 returned error can't find the container with id 8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.597620 5121 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-tqxjt" podUID="b47fedd5-33a0-43c1-9e5d-c31c88d07fb8" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.600719 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 18 00:10:04 crc kubenswrapper[5121]: apiVersion: v1 Feb 18 00:10:04 crc kubenswrapper[5121]: clusters: Feb 18 00:10:04 crc kubenswrapper[5121]: - cluster: Feb 18 00:10:04 crc kubenswrapper[5121]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: server: https://api-int.crc.testing:6443 Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: contexts: Feb 18 00:10:04 crc kubenswrapper[5121]: - context: Feb 18 00:10:04 crc kubenswrapper[5121]: cluster: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: namespace: default Feb 18 00:10:04 crc kubenswrapper[5121]: user: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: current-context: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: kind: Config Feb 18 00:10:04 crc kubenswrapper[5121]: preferences: {} Feb 18 00:10:04 crc kubenswrapper[5121]: users: Feb 18 00:10:04 crc kubenswrapper[5121]: - name: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: user: Feb 18 00:10:04 crc kubenswrapper[5121]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: EOF Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfl5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-7tprw_openshift-ovn-kubernetes(0ec6f87b-86e0-4893-9709-9dc7381bc95a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.601861 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.602429 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.607403 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5bc15fae_a0c0_4032_b673_383e603fe393.slice/crio-656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949 WatchSource:0}: Error finding container 656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949: Status 404 returned error can't find the container with id 656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949 Feb 18 00:10:04 crc kubenswrapper[5121]: W0218 00:10:04.608182 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9cd074_60f6_4754_9ef8_567f9274e384.slice/crio-3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540 WatchSource:0}: Error finding container 3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540: Status 404 returned error can't find the container with id 3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540 Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.610099 5121 
kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-n2m5r_openshift-multus(5bc15fae-a0c0-4032-b673-383e603fe393): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.611485 5121 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podUID="5bc15fae-a0c0-4032-b673-383e603fe393" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.612445 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -euo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 18 00:10:04 crc kubenswrapper[5121]: # As the secret mount is optional we must wait for the files to be present. Feb 18 00:10:04 crc kubenswrapper[5121]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 18 00:10:04 crc kubenswrapper[5121]: TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=0 Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs(){ Feb 18 00:10:04 crc kubenswrapper[5121]: CUR_TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: } Feb 18 00:10:04 crc kubenswrapper[5121]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 5 Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/kube-rbac-proxy \ Feb 18 00:10:04 crc kubenswrapper[5121]: --logtostderr \ Feb 18 00:10:04 crc kubenswrapper[5121]: --secure-listen-address=:9108 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --upstream=http://127.0.0.1:29108/ \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-private-key-file=${TLS_PK} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cert-file=${TLS_CERT} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.612461 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.615799 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 
00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # This is needed so that converting clusters from GA to TP Feb 18 00:10:04 crc kubenswrapper[5121]: # will rollout control plane pods as well Feb 18 00:10:04 crc kubenswrapper[5121]: 
network_segmentation_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" != "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable multi-network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable 
admin network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: if [ "shared" == "shared" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode shared" Feb 18 00:10:04 crc kubenswrapper[5121]: elif [ "shared" == "local" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode local" Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --init-cluster-manager "${K8S_NODE}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-bind-address "127.0.0.1:29108" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-pprof \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-config-duration \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_transit_switch_subnet_opt} \ 
Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${dns_name_resolver_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${persistent_ips_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${network_segmentation_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${gateway_mode_flags} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${route_advertisements_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${preconfigured_udn_addresses_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-ip=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-firewall=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-qos=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-service=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multicast \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multi-external-gateway=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_policy_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${admin_network_policy_enabled_flag} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.617139 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.626308 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.646342 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"656dc9c894b7a3962103162855e44d385425b7c1e696bcb4f141d9cadf296949"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.647542 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"6c9663a28b02b862fe76e092f19423657ac232b890a6bb56d739ee25fdabef33"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.648951 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"176559b5ae38f0c153aa93b7c34b09cb8b9bb641bcee610293f3a12ff1bdd87b"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.649995 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec 
--],Args:[MULTUS_DAEMON_OPT="" Feb 18 00:10:04 crc kubenswrapper[5121]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 18 00:10:04 crc kubenswrapper[5121]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6psrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-9dxsb_openshift-multus(51dcc4ed-63a2-4a92-936e-8ef22eca20d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.650433 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.650459 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed 
in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.651109 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-plr9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start 
failed in pod multus-additional-cni-plugins-n2m5r_openshift-multus(5bc15fae-a0c0-4032-b673-383e603fe393): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.651192 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-9dxsb" podUID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.651578 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.652343 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podUID="5bc15fae-a0c0-4032-b673-383e603fe393" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.652958 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 18 00:10:04 crc kubenswrapper[5121]: apiVersion: v1 Feb 18 00:10:04 crc kubenswrapper[5121]: clusters: Feb 18 00:10:04 crc kubenswrapper[5121]: - cluster: Feb 18 00:10:04 crc kubenswrapper[5121]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 18 00:10:04 
crc kubenswrapper[5121]: server: https://api-int.crc.testing:6443 Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: contexts: Feb 18 00:10:04 crc kubenswrapper[5121]: - context: Feb 18 00:10:04 crc kubenswrapper[5121]: cluster: default-cluster Feb 18 00:10:04 crc kubenswrapper[5121]: namespace: default Feb 18 00:10:04 crc kubenswrapper[5121]: user: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: name: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: current-context: default-context Feb 18 00:10:04 crc kubenswrapper[5121]: kind: Config Feb 18 00:10:04 crc kubenswrapper[5121]: preferences: {} Feb 18 00:10:04 crc kubenswrapper[5121]: users: Feb 18 00:10:04 crc kubenswrapper[5121]: - name: default-auth Feb 18 00:10:04 crc kubenswrapper[5121]: user: Feb 18 00:10:04 crc kubenswrapper[5121]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 18 00:10:04 crc kubenswrapper[5121]: EOF Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfl5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod ovnkube-node-7tprw_openshift-ovn-kubernetes(0ec6f87b-86e0-4893-9709-9dc7381bc95a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.653124 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6z5xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ss65g_openshift-machine-config-operator(ce10664c-304a-460f-819a-bf71f3517fb3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.653459 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -euo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 18 00:10:04 crc kubenswrapper[5121]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 18 00:10:04 crc kubenswrapper[5121]: # As the secret mount is optional we must wait for the files to be present. Feb 18 00:10:04 crc kubenswrapper[5121]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 18 00:10:04 crc kubenswrapper[5121]: TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=0 Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs(){ Feb 18 00:10:04 crc kubenswrapper[5121]: CUR_TS=$(date +%s) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 18 00:10:04 crc kubenswrapper[5121]: HAS_LOGGED_INFO=1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: } Feb 18 00:10:04 crc kubenswrapper[5121]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 18 00:10:04 crc kubenswrapper[5121]: log_missing_certs Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 5 Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/kube-rbac-proxy \ Feb 18 00:10:04 crc kubenswrapper[5121]: --logtostderr \ Feb 18 00:10:04 crc kubenswrapper[5121]: --secure-listen-address=:9108 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --upstream=http://127.0.0.1:29108/ \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-private-key-file=${TLS_PK} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --tls-cert-file=${TLS_CERT} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.654369 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.654402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" 
podUID="ce10664c-304a-460f-819a-bf71f3517fb3" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.654762 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqxjt" event={"ID":"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8","Type":"ContainerStarted","Data":"c260aed918a0fbbb1044a7b8402ed952d0e35ff7f5dc12723572ff04050e9601"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.655940 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: 
ovn_v6_transit_switch_subnet_opt= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "" != "" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # This is needed so that converting clusters from GA to TP Feb 18 00:10:04 crc kubenswrapper[5121]: # will rollout control plane pods as well Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" != "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_enabled_flag="--enable-multi-network" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: route_advertisements_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: 
route_advertisements_enable_flag="--enable-route-advertisements" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable multi-network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "false" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Enable admin network policy if configured (control-plane always full mode) Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag= Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "true" == "true" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: if [ "shared" == "shared" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode shared" Feb 18 00:10:04 crc kubenswrapper[5121]: elif [ "shared" == "local" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: gateway_mode_flags="--gateway-mode local" Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --init-cluster-manager "${K8S_NODE}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-bind-address "127.0.0.1:29108" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-pprof \ Feb 18 00:10:04 crc kubenswrapper[5121]: --metrics-enable-config-duration \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_join_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${dns_name_resolver_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${persistent_ips_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${network_segmentation_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${gateway_mode_flags} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${route_advertisements_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${preconfigured_udn_addresses_enable_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-ip=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-firewall=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-qos=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-egress-service=true \ 
Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multicast \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-multi-external-gateway=true \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${multi_network_policy_enabled_flag} \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${admin_network_policy_enabled_flag} Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rmw8r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovnkube-control-plane-57b78d8988-rfj5g_openshift-ovn-kubernetes(aa9cd074-60f6-4754-9ef8-567f9274e384): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.656624 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -uo pipefail Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 18 00:10:04 crc kubenswrapper[5121]: HOSTS_FILE="/etc/hosts" Feb 18 00:10:04 crc kubenswrapper[5121]: TEMP_FILE="/tmp/hosts.tmp" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Make a temporary file with the old hosts file's attributes. Feb 18 00:10:04 crc kubenswrapper[5121]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Failed to preserve hosts file. Exiting." Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: while true; do Feb 18 00:10:04 crc kubenswrapper[5121]: declare -A svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${services[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: # Fetch service IP from cluster dns if present. 
We make several tries Feb 18 00:10:04 crc kubenswrapper[5121]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 18 00:10:04 crc kubenswrapper[5121]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 18 00:10:04 crc kubenswrapper[5121]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 18 00:10:04 crc kubenswrapper[5121]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 18 00:10:04 crc kubenswrapper[5121]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 18 00:10:04 crc kubenswrapper[5121]: for i in ${!cmds[*]} Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: ips=($(eval "${cmds[i]}")) Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: svc_ips["${svc}"]="${ips[@]}" Feb 18 00:10:04 crc kubenswrapper[5121]: break Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Update /etc/hosts only if we get valid service IPs Feb 18 00:10:04 crc kubenswrapper[5121]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 18 00:10:04 crc kubenswrapper[5121]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 18 00:10:04 crc kubenswrapper[5121]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 18 00:10:04 crc kubenswrapper[5121]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # Append resolver entries for services Feb 18 00:10:04 crc kubenswrapper[5121]: rc=0 Feb 18 00:10:04 crc kubenswrapper[5121]: for svc in "${!svc_ips[@]}"; do Feb 18 00:10:04 crc kubenswrapper[5121]: for ip in ${svc_ips[${svc}]}; do Feb 18 00:10:04 crc kubenswrapper[5121]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ $rc -ne 0 ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: continue Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 18 00:10:04 crc kubenswrapper[5121]: # Replace /etc/hosts with our modified version if needed Feb 18 00:10:04 crc kubenswrapper[5121]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 18 00:10:04 crc kubenswrapper[5121]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait Feb 18 00:10:04 crc kubenswrapper[5121]: unset svc_ips Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8wqj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-tqxjt_openshift-dns(b47fedd5-33a0-43c1-9e5d-c31c88d07fb8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.656824 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsc9f" 
event={"ID":"9afb2de0-1fd9-4548-b02d-ba81525f51c8","Type":"ContainerStarted","Data":"98a363ced3134374ccc1e6a70830a1969dac263587609cb7047c0bddad1bd9be"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.657055 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.657696 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-tqxjt" podUID="b47fedd5-33a0-43c1-9e5d-c31c88d07fb8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.658967 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"e85d8c754023f5abe3422626ed04f37f2d27dc757d11d9577fb31404bb16f156"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.659259 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 18 00:10:04 crc kubenswrapper[5121]: while [ true ]; Feb 18 00:10:04 crc kubenswrapper[5121]: do Feb 18 00:10:04 crc kubenswrapper[5121]: for f in $(ls 
/tmp/serviceca); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $f Feb 18 00:10:04 crc kubenswrapper[5121]: ca_file_path="/tmp/serviceca/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ -e "${reg_dir_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: mkdir $reg_dir_path Feb 18 00:10:04 crc kubenswrapper[5121]: cp $ca_file_path $reg_dir_path/ca.crt Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: for d in $(ls /etc/docker/certs.d); do Feb 18 00:10:04 crc kubenswrapper[5121]: echo $d Feb 18 00:10:04 crc kubenswrapper[5121]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 18 00:10:04 crc kubenswrapper[5121]: reg_conf_path="/tmp/serviceca/${dp}" Feb 18 00:10:04 crc kubenswrapper[5121]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 18 00:10:04 crc kubenswrapper[5121]: rm -rf /etc/docker/certs.d/$d Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: sleep 60 & wait ${!} Feb 18 00:10:04 crc kubenswrapper[5121]: done Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5wk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-vsc9f_openshift-image-registry(9afb2de0-1fd9-4548-b02d-ba81525f51c8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.658959 5121 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.660397 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"0d2396a350fe2a9d7e1d3de27ad7aad30ef27af5204be6710e85de95e9209801"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.660619 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-vsc9f" podUID="9afb2de0-1fd9-4548-b02d-ba81525f51c8" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.660630 5121 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.662058 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.662176 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"142f908f3b7f173342d28521f70a27a943663aa51661d2dadfa6626fc9f5086e"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.663228 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 18 00:10:04 crc kubenswrapper[5121]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 18 00:10:04 crc kubenswrapper[5121]: ho_enable="--enable-hybrid-overlay" Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 18 00:10:04 crc kubenswrapper[5121]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 18 00:10:04 crc kubenswrapper[5121]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-host=127.0.0.1 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --webhook-port=9743 \ Feb 18 00:10:04 crc kubenswrapper[5121]: ${ho_enable} \ Feb 18 00:10:04 crc kubenswrapper[5121]: --enable-interconnect \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-approver \ Feb 18 00:10:04 crc kubenswrapper[5121]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --wait-for-kubernetes-api=200s \ Feb 18 00:10:04 crc kubenswrapper[5121]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: --loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664277 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664336 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc 
kubenswrapper[5121]: I0218 00:10:04.664367 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664391 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.664406 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.665596 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f "/env/_master" ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: source "/env/_master" Feb 18 00:10:04 crc kubenswrapper[5121]: set +o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: Feb 18 00:10:04 crc kubenswrapper[5121]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 18 00:10:04 crc kubenswrapper[5121]: --disable-webhook \ Feb 18 00:10:04 crc kubenswrapper[5121]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 18 00:10:04 crc kubenswrapper[5121]: 
--loglevel="${LOGLEVEL}" Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.667701 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.668120 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.669246 5121 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 
18 00:10:04 crc kubenswrapper[5121]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 18 00:10:04 crc kubenswrapper[5121]: set -o allexport Feb 18 00:10:04 crc kubenswrapper[5121]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 18 00:10:04 crc kubenswrapper[5121]: source /etc/kubernetes/apiserver-url.env Feb 18 00:10:04 crc kubenswrapper[5121]: else Feb 18 00:10:04 crc kubenswrapper[5121]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 18 00:10:04 crc kubenswrapper[5121]: exit 1 Feb 18 00:10:04 crc kubenswrapper[5121]: fi Feb 18 00:10:04 crc kubenswrapper[5121]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 18 00:10:04 crc kubenswrapper[5121]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1b
df8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 18 00:10:04 crc kubenswrapper[5121]: > logger="UnhandledError" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.672318 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.680794 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.690097 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.702261 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.713763 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.723379 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.732720 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.742847 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e
29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.763233 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768384 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768619 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.768909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.769045 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.779067 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.799322 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.859716 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers 
with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.873912 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874144 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 
00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874264 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874386 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.874518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884460 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884586 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884638 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod 
\"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.884737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884825 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884870 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884888 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884901 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884872 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884956 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884962 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884965 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.884939508 +0000 UTC m=+89.399397393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884995 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.884978289 +0000 UTC m=+89.399436024 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.884992 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885013 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.88500654 +0000 UTC m=+89.399464275 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885166 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.885135683 +0000 UTC m=+89.399593458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.885348 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.885329768 +0000 UTC m=+89.399787543 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.892498 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x
fl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.919021 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.959790 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976609 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976720 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976748 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.976768 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:04Z","lastTransitionTime":"2026-02-18T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:04 crc kubenswrapper[5121]: I0218 00:10:04.985313 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.985461 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:04 crc kubenswrapper[5121]: E0218 00:10:04.985523 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:05.985507191 +0000 UTC m=+89.499964926 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.000240 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.045108 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080539 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 
crc kubenswrapper[5121]: I0218 00:10:05.080829 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.080859 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.083510 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.123249 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.164115 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183151 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183255 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 
00:10:05.183281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183310 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.183330 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.199352 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.242669 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.270872 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.270882 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.271058 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.271407 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.277942 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.279465 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.279815 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238
e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linu
x\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.282816 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285613 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285793 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.285818 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.286635 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.291988 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.297522 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.299939 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.301593 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.303080 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.305323 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.308024 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.312571 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.315231 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.319714 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.320273 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.322449 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.323714 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.326172 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.327931 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.330075 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.331684 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.335971 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.337326 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.338791 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.340216 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.342848 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.346222 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.348816 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.351745 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.355101 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.356060 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.358545 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.359540 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.362128 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.368386 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.373186 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.374834 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.376290 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 18 00:10:05 crc 
kubenswrapper[5121]: I0218 00:10:05.377587 5121 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.377768 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.383619 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.385985 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389225 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389276 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389288 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389309 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.389324 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.390111 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.391628 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.392934 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.395218 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.397210 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.397915 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.398344 5121 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.399872 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 18 00:10:05 crc 
kubenswrapper[5121]: I0218 00:10:05.401557 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.403942 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.404989 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.406879 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.408079 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.409542 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.410891 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.413553 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 18 00:10:05 crc 
kubenswrapper[5121]: I0218 00:10:05.414548 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.416221 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.417597 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.443099 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.485372 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d3
1b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplem
entalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crco
nt/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491588 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491736 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.491790 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.536524 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8
a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":
\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"cont
ainerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": 
dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.563455 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.564461 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.564717 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.564801 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594503 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594556 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594589 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.594604 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.604508 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.641696 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.692720 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697512 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697620 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697678 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.697700 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.719721 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.763004 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800793 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800814 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.800827 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.801899 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.843893 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.894775 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895094 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895171 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.895289 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895461 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895548 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895524523 +0000 UTC m=+91.409982278 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895553 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895575 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895590 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895626 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895616576 +0000 UTC m=+91.410074311 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895721 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895763 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895753429 +0000 UTC m=+91.410211174 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895931 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.895982 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.895916304 +0000 UTC m=+91.410374109 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896004 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896067 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.896242 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.896219091 +0000 UTC m=+91.410676866 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904463 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904545 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904576 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904611 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.904638 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:05Z","lastTransitionTime":"2026-02-18T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:05 crc kubenswrapper[5121]: I0218 00:10:05.996506 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.996819 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:05 crc kubenswrapper[5121]: E0218 00:10:05.996962 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:07.996934328 +0000 UTC m=+91.511392063 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007487 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.007498 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110337 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110480 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110496 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.110526 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213417 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213475 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213504 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.213517 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.270567 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.270712 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.270899 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.271145 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316160 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316223 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316234 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.316263 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418725 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.418756 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.419909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.419991 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420004 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420022 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.420034 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.432753 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437275 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437331 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437350 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.437385 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.454347 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459530 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459601 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459614 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459635 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.459665 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.470710 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475852 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475952 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.475977 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.476006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.476034 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497297 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.497418 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:06 crc kubenswrapper[5121]: E0218 00:10:06.510599 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521146 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521175 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.521197 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623608 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623704 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623762 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.623784 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.727932 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728002 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728050 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.728066 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830716 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830809 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.830840 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933818 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:06 crc kubenswrapper[5121]: I0218 00:10:06.933830 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:06Z","lastTransitionTime":"2026-02-18T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036821 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036889 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036905 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036929 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.036946 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140312 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140373 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.140414 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.242942 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243059 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243089 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.243113 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.270343 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.270390 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.270621 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.270803 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.293301 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.305541 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.314896 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.327067 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345342 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345440 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345461 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.345513 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.349789 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.366634 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.382088 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.392787 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.404554 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.414042 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.425241 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.439554 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448171 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448245 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448258 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448279 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.448292 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.453384 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.470715 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.488494 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.517337 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir
\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\
\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b35
62b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGr
oups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.535875 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":
{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551317 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 
00:10:07.551342 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551377 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.551395 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.552426 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.564912 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654524 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654578 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.654598 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758360 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758424 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758442 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.758455 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861711 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861803 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861843 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.861858 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921386 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921528 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921667 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.921595035 +0000 UTC m=+95.436052770 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921730 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921753 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921766 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921783 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.921834 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: 
\"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921849 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.921826071 +0000 UTC m=+95.436283806 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921921 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.921983 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922006 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922061 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922075 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922028016 +0000 UTC m=+95.436485791 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922126 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922116438 +0000 UTC m=+95.436574173 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922245 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: E0218 00:10:07.922316 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:11.922300313 +0000 UTC m=+95.436758058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966320 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:07 crc kubenswrapper[5121]: I0218 00:10:07.966439 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:07Z","lastTransitionTime":"2026-02-18T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.023448 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.023695 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.023844 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:12.023817421 +0000 UTC m=+95.538275156 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069184 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069246 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069260 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069280 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.069295 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171846 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171889 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171916 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.171929 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.269792 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.269935 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.270445 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:08 crc kubenswrapper[5121]: E0218 00:10:08.270506 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273960 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273981 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.273990 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.274001 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.274012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.334504 5121 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376419 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376491 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376510 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376540 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.376562 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.394388 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.479911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480009 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480039 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.480097 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583267 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583356 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583370 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583393 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.583412 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.685953 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686017 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686028 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686044 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.686056 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788929 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.788966 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891754 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891769 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891790 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.891802 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994700 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994796 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994835 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:08 crc kubenswrapper[5121]: I0218 00:10:08.994855 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:08Z","lastTransitionTime":"2026-02-18T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097814 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097874 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097888 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097905 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.097917 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201087 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201142 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201152 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201170 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.201182 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.269997 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:09 crc kubenswrapper[5121]: E0218 00:10:09.270212 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.270594 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:09 crc kubenswrapper[5121]: E0218 00:10:09.270801 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303960 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.303981 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406394 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406805 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.406975 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.407128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.407297 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510160 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510185 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.510238 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613228 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613307 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613318 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613340 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.613352 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716431 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716545 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716575 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.716591 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819859 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.819990 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.820018 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.820036 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922536 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922612 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922685 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:09 crc kubenswrapper[5121]: I0218 00:10:09.922704 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:09Z","lastTransitionTime":"2026-02-18T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025018 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025120 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025145 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.025160 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128233 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128294 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128306 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.128339 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230678 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230758 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230773 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.230815 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.270175 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.270258 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:10 crc kubenswrapper[5121]: E0218 00:10:10.270402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:10 crc kubenswrapper[5121]: E0218 00:10:10.270619 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333355 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333438 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333462 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.333479 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436526 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.436551 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539718 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539827 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539860 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.539882 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642161 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642234 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642253 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642283 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.642304 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744414 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744515 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744547 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744579 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.744604 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847327 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847427 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.847479 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950674 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950738 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950777 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:10 crc kubenswrapper[5121]: I0218 00:10:10.950813 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:10Z","lastTransitionTime":"2026-02-18T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053578 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053675 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053693 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.053735 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156339 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156357 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156384 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.156404 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259502 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259569 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259591 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.259639 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.270225 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.270402 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.270746 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.270925 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363499 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363610 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363684 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.363755 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467542 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467555 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.467586 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570828 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570914 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570976 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.570998 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673894 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673924 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.673935 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776626 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776725 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776745 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776768 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.776782 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879083 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.879147 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973511 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973778 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.973844 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.973797959 +0000 UTC m=+103.488255734 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973921 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.973953 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.973989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974059 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974033115 +0000 UTC m=+103.488490880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974107 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974190 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974225 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974122 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974318 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974364 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974392 5121 
projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974364 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974316132 +0000 UTC m=+103.488773917 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974518 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974490718 +0000 UTC m=+103.488948573 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:11 crc kubenswrapper[5121]: E0218 00:10:11.974548 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:19.974532319 +0000 UTC m=+103.488990204 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.982955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983013 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983033 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983058 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:11 crc kubenswrapper[5121]: I0218 00:10:11.983077 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:11Z","lastTransitionTime":"2026-02-18T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.075561 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.075870 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.076028 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:20.075995105 +0000 UTC m=+103.590452880 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.085926 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086036 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086052 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.086096 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188850 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.188972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.189005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.189028 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.269983 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.270069 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.270181 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:12 crc kubenswrapper[5121]: E0218 00:10:12.270428 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291411 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291497 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291512 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291533 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.291547 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.393908 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.393999 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394020 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394048 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.394067 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496581 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496694 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.496732 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599387 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599469 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599495 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.599549 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702261 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702309 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702320 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702338 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.702348 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805357 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805408 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.805421 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908021 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908078 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908088 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908107 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:12 crc kubenswrapper[5121]: I0218 00:10:12.908121 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:12Z","lastTransitionTime":"2026-02-18T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010802 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010878 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010926 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.010947 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114566 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114654 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.114765 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.216965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217052 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217078 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217112 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.217138 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.270503 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:13 crc kubenswrapper[5121]: E0218 00:10:13.270714 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.270911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:13 crc kubenswrapper[5121]: E0218 00:10:13.271256 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319541 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319597 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319656 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.319738 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421817 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421954 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.421969 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524726 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524741 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524762 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.524774 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626801 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626871 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626883 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626901 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.626913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729120 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729185 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729198 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729218 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.729231 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832374 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832484 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832505 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.832525 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.935950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936034 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:13 crc kubenswrapper[5121]: I0218 00:10:13.936111 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:13Z","lastTransitionTime":"2026-02-18T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039244 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039260 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039284 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.039300 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142205 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142278 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142298 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.142312 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245727 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245787 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245817 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.245829 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.269749 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:14 crc kubenswrapper[5121]: E0218 00:10:14.269889 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.269931 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:14 crc kubenswrapper[5121]: E0218 00:10:14.270123 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347403 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347466 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.347496 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450259 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450313 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450323 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450337 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.450346 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.552897 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553042 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553063 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553089 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.553107 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656415 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656433 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656458 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.656480 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759388 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759449 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759489 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.759500 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862701 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862712 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.862743 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965869 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965941 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965963 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:14 crc kubenswrapper[5121]: I0218 00:10:14.965975 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:14Z","lastTransitionTime":"2026-02-18T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071731 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071781 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071794 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.071824 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174409 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174431 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.174444 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.269936 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.270276 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:15 crc kubenswrapper[5121]: E0218 00:10:15.270273 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:15 crc kubenswrapper[5121]: E0218 00:10:15.270579 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276038 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276090 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276123 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.276139 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378534 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378602 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.378723 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481871 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481964 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.481994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.482027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.482053 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584722 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584800 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584825 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584856 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.584881 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689113 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689403 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689417 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689439 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.689456 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.696474 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.719161 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":
true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f
06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.743370 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.758630 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.776211 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.790374 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792296 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792359 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792399 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.792416 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.818450 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.829897 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.843699 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.859771 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.878780 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.891242 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894858 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894909 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894923 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894951 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.894968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.906527 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.917014 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.929019 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.939364 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.953111 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.970358 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.982092 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.995075 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:15 crc 
kubenswrapper[5121]: I0218 00:10:15.996670 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996719 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996740 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996764 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:15 crc kubenswrapper[5121]: I0218 00:10:15.996782 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:15Z","lastTransitionTime":"2026-02-18T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.012181 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099436 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099448 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099467 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.099480 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202457 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202533 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202580 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.202599 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.269938 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.270445 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.270840 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.271082 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305151 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305217 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305235 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305262 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.305280 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410860 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410917 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410931 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.410964 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512665 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512740 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512754 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512774 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.512785 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615066 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615306 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615422 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.615519 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.702001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.705812 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.705856 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.709175 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" exitCode=0 Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.709204 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714330 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714423 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc 
kubenswrapper[5121]: I0218 00:10:16.714511 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714579 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.714638 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.723455 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.727102 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738891 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738950 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.738975 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.754777 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760188 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760227 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760238 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.760264 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.762558 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.771967 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777696 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777743 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777755 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777775 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.777788 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.783156 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.794378 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799091 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799150 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799174 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799198 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799151 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.799217 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.808348 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.809875 5121 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400444Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861244Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has no 
disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5b
ea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3
6c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83
612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d
4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"71477c84-568f-4f6d-8a8d-dd02a666cc72\\\",\\\"systemUUID\\\":\\\"48370276-1fd8-44a9-96f1-caf0cd2b4c95\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: E0218 00:10:16.810406 5121 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812240 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812390 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812573 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.812724 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.826100 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.835498 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.853964 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.864772 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.879338 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.892947 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.904704 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.914335 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915628 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915721 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915768 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.915787 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:16Z","lastTransitionTime":"2026-02-18T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.926172 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.938538 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.959039 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.978725 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:16 crc kubenswrapper[5121]: I0218 00:10:16.993144 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.008859 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.018900 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018955 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.018993 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.019006 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.024894 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.037961 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.051545 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.066822 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.094397 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.109469 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.121750 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122572 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122627 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122655 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.122680 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.127225 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.140950 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.159244 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.170238 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.180406 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.188080 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.203170 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount
\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.216985 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224442 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224485 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224494 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224510 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.224522 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.231652 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.241869 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.254110 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.264381 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276142 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:17 crc kubenswrapper[5121]: E0218 00:10:17.276346 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276711 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:17 crc kubenswrapper[5121]: E0218 00:10:17.276917 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.276966 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.296241 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.305898 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.324151 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\
"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326159 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326245 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326268 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326292 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.326311 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.340869 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.352723 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.365727 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.379828 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.389906 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.400983 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.410776 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.422916 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428932 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428977 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.428987 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.429002 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.429014 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.434194 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.447286 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.470399 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.488148 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.501821 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.512482 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.530718 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532538 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532617 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.532632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.533165 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.533192 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.541112 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686298 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686395 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686418 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.686432 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.716055 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5" exitCode=0 Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.716129 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721300 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721363 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721374 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721384 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" 
event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.721394 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.723295 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tqxjt" event={"ID":"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8","Type":"ContainerStarted","Data":"84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.730252 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.742609 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.757421 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminat
ed\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6e
b6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.769533 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.782150 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789734 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789802 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 
00:10:17.789821 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.789870 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.791442 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.804101 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\
\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00
:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.816410 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"
cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\
\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.830632 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.843523 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.856285 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.872571 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc 
kubenswrapper[5121]: I0218 00:10:17.889001 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893407 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893461 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893475 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893496 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.893509 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.910193 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.926883 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.941494 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.953533 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.975302 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.987249 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995928 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995948 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:17 crc kubenswrapper[5121]: I0218 00:10:17.995962 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:17Z","lastTransitionTime":"2026-02-18T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.001060 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.017188 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.028079 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.039706 5121 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.054148 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0
],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc
9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.073191 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.086170 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099126 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099181 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099192 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099210 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099228 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.099946 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.108972 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.160323 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.195852 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201658 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201680 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.201706 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.231286 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.270430 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.270509 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:18 crc kubenswrapper[5121]: E0218 00:10:18.270672 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:18 crc kubenswrapper[5121]: E0218 00:10:18.270803 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.272467 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304404 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304468 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304481 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304500 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.304512 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.311576 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.363029 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.393885 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408446 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408499 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408537 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.408555 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.432547 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.473389 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.511641 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514478 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514527 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514542 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514565 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.514579 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619438 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619864 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619879 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619898 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.619910 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722221 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722611 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722621 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722636 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.722649 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.732711 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.735288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.749897 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.769603 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.781339 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.794446 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.802291 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.816439 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825739 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825791 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825810 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825834 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.825852 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.826047 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.836250 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.872307 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.914496 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927876 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 
00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927925 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.927987 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:18Z","lastTransitionTime":"2026-02-18T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.953727 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:18 crc kubenswrapper[5121]: I0218 00:10:18.991766 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031739 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031823 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031851 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.031870 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.033448 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.074234 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.110091 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134175 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134247 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134271 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134297 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.134317 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.156438 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.196637 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.235729 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.236930 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237029 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237049 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237075 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.237162 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.270945 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.270948 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:19 crc kubenswrapper[5121]: E0218 00:10:19.271250 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:19 crc kubenswrapper[5121]: E0218 00:10:19.271456 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.276551 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339571 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339621 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339632 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339652 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.339680 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.442531 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443068 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443240 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443416 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.443621 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547137 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547204 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547222 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547249 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.547269 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649840 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649921 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649974 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.649993 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.745213 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.748272 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676" exitCode=0 Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.748391 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.753529 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-vsc9f" event={"ID":"9afb2de0-1fd9-4548-b02d-ba81525f51c8","Type":"ContainerStarted","Data":"e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754936 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754968 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: 
I0218 00:10:19.754978 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754989 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.754999 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.762854 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.779845 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.790629 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.817198 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitc
h\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.831981 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.847252 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.858665 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859351 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859397 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859436 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.859450 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.872453 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\
\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.885357 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.898028 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.907201 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.917046 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.926988 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.938512 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.955380 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when 
the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967769 5121 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967842 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.967852 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:19Z","lastTransitionTime":"2026-02-18T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.978429 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supple
mentalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:19 crc kubenswrapper[5121]: I0218 00:10:19.989531 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde7261
09a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013754 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013853 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013929 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.013966 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014056 5121 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014118 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014099248 +0000 UTC m=+119.528556993 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014456 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014444917 +0000 UTC m=+119.528902672 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014559 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014574 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014586 5121 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014616 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014607681 +0000 UTC m=+119.529065426 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014689 5121 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014715 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014708183 +0000 UTC m=+119.529165928 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014764 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014775 5121 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014783 5121 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.014808 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.014800526 +0000 UTC m=+119.529258271 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.040683 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",
\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu
\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a
682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440
e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071213 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071333 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071375 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.071390 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.072997 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.115627 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.115922 5121 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.116164 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs podName:5b49811f-e44a-43e9-80e6-15fcc9ed145f nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.116138968 +0000 UTC m=+119.630596693 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs") pod "network-metrics-daemon-mlvtl" (UID: "5b49811f-e44a-43e9-80e6-15fcc9ed145f") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.117115 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"rest
artCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d0
2ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\
\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests
\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b
6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.153853 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174029 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 
00:10:20.174086 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.174115 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.195276 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.235377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.269842 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.270019 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.270191 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.270324 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.271545 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:20 crc kubenswrapper[5121]: E0218 00:10:20.271882 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276641 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276767 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276787 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276834 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.276859 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.290381 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.316392 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.358377 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379530 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379552 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379574 5121 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.379591 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.392996 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f7
5eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.433774 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e
3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.473872 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481970 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.481983 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.482005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.482018 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.512094 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.553287 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.584713 5121 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585110 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585208 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585321 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.585436 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.592395 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.632541 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.679155 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.687949 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688008 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688022 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688046 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.688061 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.716030 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.751453 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-p
roxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.760673 5121 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"84ca5e1e4b35397de8f78366548363a661feb4d56e2620632adfb38fece38466"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.760848 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"d264212f574ac694a4d2414e785c3d7f289fd6e5e6b18def1902e17badf38968"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.762614 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993" exitCode=0 Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.762689 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.765509 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerStarted","Data":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.770172 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790020 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc 
kubenswrapper[5121]: I0218 00:10:20.790100 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790116 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790144 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.790161 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.792180 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"moun
tPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc 
kubenswrapper[5121]: I0218 00:10:20.833190 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa248b01-70eb-4e3f-8e58-80caf7bd2261\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://76089c97509d5a244aeca990931d31b8fcccd44fe35da02e04fbd152c3d896df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://534f3aefb1393bc8ae49ec9275b112466b4edc4693f06acfb9de7b84a456d5b5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://98aec2fc6e0751df5f38f34980f710a820564f0b0da342b8f9dd772891c25a5e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPat
h\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.881947 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"980cbb7d-2b54-4888-aaf4-1ba599869bac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://55e2bb101421653276cb
48b70e8eaf27342ed1e8ce6b8a5b8411878d8fa1a88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5e55154acd14118fa43687aea91f10555e844abea6f7909366fdc5959f9ec4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f67a9aaea93ff9e7d66d6d75bcdc7be7c940454d02ff6902da0b32cc148f9be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://394874d6ff9b824a35c878026fc3fa81836a02a609d14e4c22cfe769b350a7bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"r
equests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://27ee874d1ac35d2c7cfa8ac4dc70fe59071236712d8e435686f830ee33511a4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\
\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d87862b8ab4ecbb1b5ccb1233c70ecf68f84a3d9945e250331c1effa0860adf1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://34090f91db97d5d1e2f33bb05fd741e1bff5e59e0862c9a3a237f8944079770b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d5c2e1a822bc2be327c464e8122d6ec7440e1d9c88ad3aa4e83aa75ff6b73899\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892003 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892053 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892066 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.892097 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.913555 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"557bb62e-e0a8-4dc6-9693-f1480c510930\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b7366f5cf6
88f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-18T00:09:54Z\\\",\\\"message\\\":\\\"vvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nW0218 00:09:54.016908 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0218 00:09:54.017134 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0218 00:09:54.018375 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1381600889/tls.crt::/tmp/serving-cert-1381600889/tls.key\\\\\\\"\\\\nI0218 00:09:54.582556 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0218 00:09:54.585352 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0218 00:09:54.585372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0218 00:09:54.585396 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0218 00:09:54.585408 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0218 00:09:54.590578 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0218 00:09:54.590598 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0218 00:09:54.590643 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590695 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0218 00:09:54.590704 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0218 00:09:54.590712 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0218 00:09:54.590718 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0218 00:09:54.590725 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0218 00:09:54.594529 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-18T00:09:53Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.954269 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ca5e1e4b35397de8f78366548363a661feb4d56e2620632adfb38fece38466\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://d264212f574ac694a4d2414e785c3d7f289fd6e5e6b18def1902e17badf38968\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994235 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-vsc9f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9afb2de0-1fd9-4548-b02d-ba81525f51c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e5cc3e9aeadca22e5dc4792e3db2c4fdc6c8481677cbd38d1a08b98cef00504c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lx5wk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-vsc9f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994877 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994888 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994902 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:20 crc kubenswrapper[5121]: I0218 00:10:20.994913 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:20Z","lastTransitionTime":"2026-02-18T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.044433 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xfl5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7tprw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.073257 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9cd074-60f6-4754-9ef8-567f9274e384\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"},\\\"containerID\\\":\
\\"cri-o://74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"20Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"},\\\"containerID\\\":\\\"cri-o://07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"300Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rmw8r\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-rfj5g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096894 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096959 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.096968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.111582 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.152722 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-tqxjt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b47fedd5-33a0-43c1-9e5d-c31c88d07fb8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"},\\\"containerID\\\":\\\"cri-o://84ed63585a6b16150972599af8b6e27866ac88b9e355fbf12d2bf57b831e570d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"21Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:17Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q8wqj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tqxjt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.194510 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5bc15fae-a0c0-4032-b673-383e603fe393\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1ff0522bc4101ec8fd1af6b3747042f0831114ca12aab749ea912095f0346b5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\
\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a5164f9a084534915d3f2b4170959fcbe4745323a1a562ec10c351859b5e676\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:18Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d0a302449968b1e7fb05aa234cd4933523c15aac9a6d30397a4e37c97ed0993\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:10:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:10:20Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\
\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-plr9k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-n2m5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198717 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198760 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198772 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198797 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.198814 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.233899 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.271080 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.278247 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.278284 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:21 crc kubenswrapper[5121]: E0218 00:10:21.278444 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:21 crc kubenswrapper[5121]: E0218 00:10:21.278590 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.300954 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.301196 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.301453 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.302129 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.302224 5121 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.315190 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5b49811f-e44a-43e9-80e6-15fcc9ed145f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-swdmp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mlvtl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.352782 5121 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d25dd473-4453-4646-8742-7f00c35e4170\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:09:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e58bfdbd6a7b7f0ade4a2068db44034888c49a6bd3ad2d05922a651106b1035d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://fe08e9e6cf118c67be34c66cd605b7821bc7190bd835a3a5a604f993e4dce90c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://95c3eb236e60016f1c697fa76ba7ef861c66ae5b50ec0dff3fd325155cd739ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:40Z\\\"
}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee1ef198e2c15be1871df7cedc831664e39348830ec63b1f635f783a4f4e6aaf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.393257 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ca23026-5694-4d75-b0c1-7f88599bc8e2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:08:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d2281e89f2ecd936d40c5e2676626f376f52e1fd7a5e42e27adffd7cdbfa56b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:08:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ce4b61509c01dde990964290db1dadc53654e18cfad5a42b4cd5f638ea1ee6f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-18T00:08:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-18T00:08:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:08:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405766 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405838 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405859 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405886 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.405909 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.438796 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:19Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c2cee2a68f8db45da6bb1dfd94e0ab9c27519fa137d703a875a33beaa45d12c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:19Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.474287 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod 
was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.508965 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509038 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509065 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509099 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.509125 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.524396 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ce10664c-304a-460f-819a-bf71f3517fb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a00f298fe05cbdcf19e0793e479a856bf1b24e79d64a4c5eba76b79b2814b8e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}}
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:16Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6z5xr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-ss65g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.595566 5121 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-9dxsb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"51dcc4ed-63a2-4a92-936e-8ef22eca20d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-18T00:10:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\
":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-18T00:10:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6psrx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-18T00:10:04Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9dxsb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611189 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611235 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611247 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611264 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.611274 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713266 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713341 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713363 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.713376 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.781239 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="9eb9b4520abbd7e9304ca9519934fdcaf9dd7220dde2d520336c2cd5252af409" exitCode=0
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.781343 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"9eb9b4520abbd7e9304ca9519934fdcaf9dd7220dde2d520336c2cd5252af409"}
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815612 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815733 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815757 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815786 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.815810 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.882949 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tqxjt" podStartSLOduration=83.882927394 podStartE2EDuration="1m23.882927394s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.849772731 +0000 UTC m=+105.364230546" watchObservedRunningTime="2026-02-18 00:10:21.882927394 +0000 UTC m=+105.397385129"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918473 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918567 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.918625 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:21Z","lastTransitionTime":"2026-02-18T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.959758 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=17.959738393 podStartE2EDuration="17.959738393s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.959398414 +0000 UTC m=+105.473856169" watchObservedRunningTime="2026-02-18 00:10:21.959738393 +0000 UTC m=+105.474196128"
Feb 18 00:10:21 crc kubenswrapper[5121]: I0218 00:10:21.973972 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.973937582 podStartE2EDuration="18.973937582s" podCreationTimestamp="2026-02-18 00:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:21.973142371 +0000 UTC m=+105.487600196" watchObservedRunningTime="2026-02-18 00:10:21.973937582 +0000 UTC m=+105.488395357"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021237 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021275 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.021290 5121 setters.go:618] "Node became not ready"
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.025761 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podStartSLOduration=84.025744753 podStartE2EDuration="1m24.025744753s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.025156168 +0000 UTC m=+105.539613983" watchObservedRunningTime="2026-02-18 00:10:22.025744753 +0000 UTC m=+105.540202489"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.080773 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9dxsb" podStartSLOduration=84.080738471 podStartE2EDuration="1m24.080738471s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.048435509 +0000 UTC m=+105.562893324" watchObservedRunningTime="2026-02-18 00:10:22.080738471 +0000 UTC m=+105.595196276"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.081163 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=18.081154642 podStartE2EDuration="18.081154642s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2026-02-18 00:10:22.079454147 +0000 UTC m=+105.593911902" watchObservedRunningTime="2026-02-18 00:10:22.081154642 +0000 UTC m=+105.595612407"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124498 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124539 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124551 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124569 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.124582 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.129002 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=19.127911798 podStartE2EDuration="19.127911798s" podCreationTimestamp="2026-02-18 00:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.12496584 +0000 UTC m=+105.639423615" watchObservedRunningTime="2026-02-18 00:10:22.127911798 +0000 UTC m=+105.642369583" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227024 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227128 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227153 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.227168 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.235247 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-vsc9f" podStartSLOduration=84.235220481 podStartE2EDuration="1m24.235220481s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.235019346 +0000 UTC m=+105.749477091" watchObservedRunningTime="2026-02-18 00:10:22.235220481 +0000 UTC m=+105.749678206" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.269720 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.269755 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:22 crc kubenswrapper[5121]: E0218 00:10:22.269931 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:22 crc kubenswrapper[5121]: E0218 00:10:22.270051 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.315029 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podStartSLOduration=83.31500638 podStartE2EDuration="1m23.31500638s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.31464049 +0000 UTC m=+105.829098235" watchObservedRunningTime="2026-02-18 00:10:22.31500638 +0000 UTC m=+105.829464105" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329460 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329517 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329531 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329547 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.329558 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432138 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432230 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432254 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.432270 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535014 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535084 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535105 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535131 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.535150 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.636998 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637047 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637059 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.637091 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739007 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739077 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739115 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739134 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.739145 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.787799 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.791646 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerStarted","Data":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.791984 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.792028 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.792038 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.823823 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.832091 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841249 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841286 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc 
kubenswrapper[5121]: I0218 00:10:22.841299 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841348 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.841363 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.844984 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podStartSLOduration=84.844970854 podStartE2EDuration="1m24.844970854s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:22.843120165 +0000 UTC m=+106.357577920" watchObservedRunningTime="2026-02-18 00:10:22.844970854 +0000 UTC m=+106.359428589" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943883 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943939 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943951 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943972 5121 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:22 crc kubenswrapper[5121]: I0218 00:10:22.943986 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:22Z","lastTransitionTime":"2026-02-18T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046598 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046692 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046708 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046729 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.046745 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149378 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149448 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149466 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149493 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.149511 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252713 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252798 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252824 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.252842 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.270585 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:23 crc kubenswrapper[5121]: E0218 00:10:23.270802 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.270840 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:23 crc kubenswrapper[5121]: E0218 00:10:23.271066 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355401 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355476 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355501 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.355518 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458594 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458695 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458738 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.458757 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561354 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561418 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561437 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561464 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.561489 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663707 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663778 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663804 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663839 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.663864 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766397 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766477 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766502 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766529 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.766551 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.803004 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c6e1cf4b8e8f8bf8edaa911bf15ccc6c1afae31bcf0a3c9aced7057707efb155"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.808999 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0" exitCode=0 Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.809075 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"e03bdadfffa5cfdd910932db26b739a5197e0563f32039e91fa14e6a1031c3f0"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.869518 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870062 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870412 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.870734 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.871012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974166 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974215 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974225 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974241 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:23 crc kubenswrapper[5121]: I0218 00:10:23.974253 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:23Z","lastTransitionTime":"2026-02-18T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079274 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079345 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079364 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079396 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.079417 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181921 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181975 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.181987 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.182005 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.182021 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.269854 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:24 crc kubenswrapper[5121]: E0218 00:10:24.270134 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.269886 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:24 crc kubenswrapper[5121]: E0218 00:10:24.270609 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285885 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285917 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285928 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285944 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.285956 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390121 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390174 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390183 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390202 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.390212 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.492997 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493072 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493091 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493119 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.493138 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596529 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596599 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596629 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596685 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.596704 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699331 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699379 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699393 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699411 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.699424 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802324 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802383 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802399 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802420 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.802437 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.818992 5121 generic.go:358] "Generic (PLEG): container finished" podID="5bc15fae-a0c0-4032-b673-383e603fe393" containerID="81e434867b21e9bbfc675f454a70822b0a690cbb63fce7c952838ef2ad557b31" exitCode=0 Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.820899 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerDied","Data":"81e434867b21e9bbfc675f454a70822b0a690cbb63fce7c952838ef2ad557b31"} Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905006 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905065 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905085 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905106 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:24 crc kubenswrapper[5121]: I0218 00:10:24.905121 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:24Z","lastTransitionTime":"2026-02-18T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008406 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008459 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008473 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008492 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.008540 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.112912 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113251 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113263 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113281 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.113295 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.126582 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlvtl"] Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.126757 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.126868 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215896 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215979 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.215998 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.216027 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.216046 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.269942 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.270117 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.270597 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:25 crc kubenswrapper[5121]: E0218 00:10:25.270736 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318211 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318299 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318323 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318352 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.318374 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.420715 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421847 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421895 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421922 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.421941 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524616 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524635 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524728 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.524749 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627893 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627952 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627972 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.627995 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.628012 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730534 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730596 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730615 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730641 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.730694 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832558 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832630 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832681 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832709 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.832728 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.834933 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" event={"ID":"5bc15fae-a0c0-4032-b673-383e603fe393","Type":"ContainerStarted","Data":"755997e9b414036d2bacb2870115aa879b252238b47a9af329648aa8e97f12fb"} Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936019 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936079 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936094 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936113 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:25 crc kubenswrapper[5121]: I0218 00:10:25.936125 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:25Z","lastTransitionTime":"2026-02-18T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038832 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038897 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038911 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038935 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.038954 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.141812 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142852 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142910 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142943 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.142968 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245609 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245742 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245769 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245820 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.245839 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.270010 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.270081 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:26 crc kubenswrapper[5121]: E0218 00:10:26.270232 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:26 crc kubenswrapper[5121]: E0218 00:10:26.270438 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.348916 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.348994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349015 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349043 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.349062 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451408 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451478 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451497 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451522 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.451540 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554044 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554109 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554129 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554157 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.554175 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656522 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656622 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656684 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656714 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.656736 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760103 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760168 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760182 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760202 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.760216 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836312 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836371 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836389 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836413 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.836431 5121 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-18T00:10:26Z","lastTransitionTime":"2026-02-18T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.904094 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-n2m5r" podStartSLOduration=88.904058912 podStartE2EDuration="1m28.904058912s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:25.875257281 +0000 UTC m=+109.389715096" watchObservedRunningTime="2026-02-18 00:10:26.904058912 +0000 UTC m=+110.418516687" Feb 18 00:10:26 crc kubenswrapper[5121]: I0218 00:10:26.905351 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw"] Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.237862 5121 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.251261 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336820 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336868 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.336872 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: E0218 00:10:27.337004 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 18 00:10:27 crc kubenswrapper[5121]: E0218 00:10:27.337390 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.339867 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.340195 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.342999 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.346259 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424265 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424599 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424802 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.424930 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.425175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod 
\"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526477 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526567 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526609 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526633 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526767 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.526863 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/301c7ba1-7668-44c5-bae1-acad05f92eb5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.528026 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/301c7ba1-7668-44c5-bae1-acad05f92eb5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.534745 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/301c7ba1-7668-44c5-bae1-acad05f92eb5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " 
pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.557610 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/301c7ba1-7668-44c5-bae1-acad05f92eb5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vtbgw\" (UID: \"301c7ba1-7668-44c5-bae1-acad05f92eb5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.663149 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" Feb 18 00:10:27 crc kubenswrapper[5121]: W0218 00:10:27.688320 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301c7ba1_7668_44c5_bae1_acad05f92eb5.slice/crio-3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7 WatchSource:0}: Error finding container 3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7: Status 404 returned error can't find the container with id 3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7 Feb 18 00:10:27 crc kubenswrapper[5121]: I0218 00:10:27.844339 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" event={"ID":"301c7ba1-7668-44c5-bae1-acad05f92eb5","Type":"ContainerStarted","Data":"3b995e6103224e0b7c6fa514233130eefc49869bcf9f9fb1c830b906deb83fd7"} Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.269718 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.269754 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:28 crc kubenswrapper[5121]: E0218 00:10:28.269963 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mlvtl" podUID="5b49811f-e44a-43e9-80e6-15fcc9ed145f" Feb 18 00:10:28 crc kubenswrapper[5121]: E0218 00:10:28.270085 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.850041 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" event={"ID":"301c7ba1-7668-44c5-bae1-acad05f92eb5","Type":"ContainerStarted","Data":"5636d14b2bde59863ee496176825e03ce7b2920be8b499564a495e5f220d686f"} Feb 18 00:10:28 crc kubenswrapper[5121]: I0218 00:10:28.875784 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vtbgw" podStartSLOduration=90.875754103 podStartE2EDuration="1m30.875754103s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:28.875122576 +0000 UTC m=+112.389580351" watchObservedRunningTime="2026-02-18 00:10:28.875754103 +0000 UTC m=+112.390211928" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.132994 5121 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.133281 5121 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.190044 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.194025 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202226 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202228 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.202444 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.203727 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.208673 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.208903 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.213058 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.215263 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.215679 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.217381 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.221359 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.221575 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226456 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226499 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.226737 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227733 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227738 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227897 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.227964 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228196 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228390 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.228845 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229078 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229097 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229176 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.229773 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.231506 5121 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.234504 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.234823 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240203 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240392 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240563 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240711 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240813 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.240968 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.241038 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.241152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.243478 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247192 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247408 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.247438 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.250669 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.253603 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.254174 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.269767 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.269925 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270009 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270025 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270117 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270152 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270257 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270351 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270432 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.270569 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.284348 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.285305 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.286875 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287592 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287609 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.287940 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.288095 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.288208 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.289296 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.290575 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.296514 5121 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.306424 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.306663 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.308810 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.308939 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.309770 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.310165 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.310238 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.312871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.314706 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315296 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315380 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.315468 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.316060 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.316702 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317414 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317795 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.317987 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.318562 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.319887 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320291 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320469 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320681 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.320883 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.321486 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.323895 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.325298 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352855 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352912 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352951 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352971 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.352987 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353004 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353019 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353036 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353053 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353071 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353086 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353102 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353118 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353134 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353150 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353166 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353182 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353197 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353218 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353240 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353272 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353288 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353311 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353341 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353363 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353395 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353411 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353426 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353440 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353463 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353481 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353496 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353521 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353548 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353564 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353579 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353596 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353611 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353627 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353644 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353678 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353695 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353710 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353768 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353799 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353895 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353936 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.353980 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354033 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354097 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354121 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354159 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354198 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354220 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.354248 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.418776 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429213 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429354 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.429740 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.431694 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.433314 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.433639 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435257 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435400 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435485 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435545 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435629 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435700 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.435556 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.436875 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.437083 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.439908 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.440077 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.443007 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445272 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445957 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.445983 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446160 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446342 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.446786 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448267 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448363 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.448577 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.449004 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.451707 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.451928 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.457818 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.459097 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.460415 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.460615 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461153 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461474 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.461758 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.462255 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.464291 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465571 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465619 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465662 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465765 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465800 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465879 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465905 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465931 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465960 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.465982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466005 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466033 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466056 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466118 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466148 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466222 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466211 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-node-pullsecrets\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466253 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466315 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466404 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.466534 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467453 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467535 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.467588 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.468584 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.470212 5121 reflector.go:430] "Caches 
populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.470912 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-config\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.471803 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472203 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472344 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472712 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.472892 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.473546 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/62cfebd6-02c7-4437-9be3-60aec3d91f1b-tmp-dir\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.473587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-trusted-ca-bundle\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474174 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474449 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-encryption-config\") pod \"apiserver-8596bd845d-jrx99\" (UID: 
\"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.474522 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475023 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-encryption-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475076 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475248 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod 
\"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475362 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475397 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475465 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475505 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475525 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.476041 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-serving-cert\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.476353 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.478622 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.475566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.478737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479000 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479044 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479076 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479467 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479515 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479541 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479585 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479626 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479662 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479681 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479709 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479748 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479767 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479882 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479924 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.479989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480014 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.480037 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480055 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480073 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480195 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-serving-ca\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480090 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.480770 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482208 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-service-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482232 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482528 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.482832 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18ee9403-18d0-4528-a3cd-82ea0dba3576-config\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") 
" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483147 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483699 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483705 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483703 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-ca\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.483762 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-dir\") pod \"apiserver-8596bd845d-jrx99\" (UID: 
\"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484052 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484101 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-audit-dir\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484246 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e173473c-5d44-44cf-833c-2a88d061dd9f-audit-policies\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484519 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-images\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/005aa352-e543-4bfd-ba57-b2cb37eb98f6-config\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: 
\"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.484807 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-config\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485101 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485290 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.485414 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486194 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-serving-cert\") pod 
\"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-image-import-ca\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486568 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-etcd-client\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.486998 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/62cfebd6-02c7-4437-9be3-60aec3d91f1b-etcd-client\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") 
pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487795 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/005aa352-e543-4bfd-ba57-b2cb37eb98f6-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.487804 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.488878 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.489965 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.490744 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.491309 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-serving-cert\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.492928 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e173473c-5d44-44cf-833c-2a88d061dd9f-etcd-client\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.494542 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18ee9403-18d0-4528-a3cd-82ea0dba3576-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.495268 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.495400 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.496697 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.500591 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.500779 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.502062 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.505762 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.505917 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.510508 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.510621 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.513012 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.513095 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.515297 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.515475 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.516313 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.518761 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.518879 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.520942 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.520964 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.521072 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.524008 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.524111 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.526129 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.528349 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-mkw5h"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.528399 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.529995 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.536956 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.539842 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.539947 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.543017 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.543441 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.545775 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.546225 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549086 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549118 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.549361 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.552567 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.552745 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.556469 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.558393 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.558521 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561368 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561390 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.561525 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.563529 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-mvs4c"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.563892 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.566567 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.566715 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569323 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569344 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569354 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569363 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569371 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569380 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569389 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569397 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:10:29 
crc kubenswrapper[5121]: I0218 00:10:29.569406 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569418 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569429 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569442 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569460 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569468 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569477 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569495 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.569498 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vn45p"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.571914 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.572048 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vn45p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.576945 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577685 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577736 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-rsbpp"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.577756 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.581119 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.581228 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583470 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583480 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583489 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583501 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583521 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583533 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583581 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583592 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"] Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583603 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-mkw5h"] 
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583611 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583622 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583630 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583646 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583669 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583696 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583707 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583716 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583729 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rsbpp"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.583743 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"]
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586189 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586381 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.585513 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586666 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586687 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586710 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586745 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586769 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586863 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586925 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586943 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586978 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.586997 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587012 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587027 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587044 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587058 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587122 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587270 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587289 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587320 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587336 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587388 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587428 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587509 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587540 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587558 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8gh7\" (UniqueName: \"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587586 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587602 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587670 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587687 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587717 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587732 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587759 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.587889 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.596138 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.616848 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.636133 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.674596 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgpvg\" (UniqueName: \"kubernetes.io/projected/e173473c-5d44-44cf-833c-2a88d061dd9f-kube-api-access-bgpvg\") pod \"apiserver-8596bd845d-jrx99\" (UID: \"e173473c-5d44-44cf-833c-2a88d061dd9f\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689356 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689635 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689828 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.689970 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690135 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690287 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690577 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690723 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.690872 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/49d45bda-ec47-407b-b527-c7267c3825c0-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691154 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691289 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691413 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691526 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691625 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691743 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.691866 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692071 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692218 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692403 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692635 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692753 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692852 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693116 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-config\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.692961 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693425 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693550 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693695 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694253 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694526 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.693877 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/9c0d1702-8700-443c-9bf2-afa4222bd41c-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.694773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695201 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695349 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695439 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695543 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695688 5121 reconciler_common.go:251]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.695981 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696230 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696504 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696585 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696668 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.696703 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696766 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696919 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.696958 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: 
\"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697023 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697081 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697108 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697170 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697200 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697263 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697286 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697330 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697356 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697401 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697422 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697449 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697498 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697522 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697563 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697611 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697680 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697705 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697793 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697812 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697930 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.697962 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.698361 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69df6480-3d02-4112-b8db-3507dd5a5f49-tmp-dir\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.698973 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699124 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699234 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wf2\" (UniqueName: 
\"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699397 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699475 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699584 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699195 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e287aff-1485-4233-8648-ece2622ccf37-tmp-dir\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699695 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699788 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.699820 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700197 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700225 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-available-featuregates\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700282 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700304 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700348 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc 
kubenswrapper[5121]: I0218 00:10:29.700297 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdb0e57-487f-44df-bfea-01e173ebb1e3-trusted-ca\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700383 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700449 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700585 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/0720e131-2f16-4741-bef5-fa81e51085a8-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700690 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700716 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700746 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700764 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700784 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700802 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700822 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700917 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.700986 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701011 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701033 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701057 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701082 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701119 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701162 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701188 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701247 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701271 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701292 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701683 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e287aff-1485-4233-8648-ece2622ccf37-metrics-tls\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701725 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h8gh7\" (UniqueName: \"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.701886 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/49d45bda-ec47-407b-b527-c7267c3825c0-tmp\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702271 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69df6480-3d02-4112-b8db-3507dd5a5f49-serving-cert\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702304 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69df6480-3d02-4112-b8db-3507dd5a5f49-config\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702406 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/49d45bda-ec47-407b-b527-c7267c3825c0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702525 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-serving-cert\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.702600 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.703935 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9c0d1702-8700-443c-9bf2-afa4222bd41c-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.705170 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbdb0e57-487f-44df-bfea-01e173ebb1e3-serving-cert\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.705878 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3597721-7184-4c2a-8050-ccec6fa345e4-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.711453 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9c0d1702-8700-443c-9bf2-afa4222bd41c-config\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: \"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.714565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9s2bk\" (UniqueName: \"kubernetes.io/projected/18ee9403-18d0-4528-a3cd-82ea0dba3576-kube-api-access-9s2bk\") pod \"openshift-apiserver-operator-846cbfc458-j5zbs\" (UID: \"18ee9403-18d0-4528-a3cd-82ea0dba3576\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.732142 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfxqx\" (UniqueName: \"kubernetes.io/projected/62cfebd6-02c7-4437-9be3-60aec3d91f1b-kube-api-access-nfxqx\") pod \"etcd-operator-69b85846b6-trwcb\" (UID: \"62cfebd6-02c7-4437-9be3-60aec3d91f1b\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.736431 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.741353 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-config\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.756523 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.759516 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.777352 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.803812 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804283 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804519 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805001 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805176 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805134 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-csi-data-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804958 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-socket-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.804245 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-plugins-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805686 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805847 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.805989 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806118 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806327 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.806570 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-tmpfs\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807177 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/21a8987a-ee46-4b59-b949-55032c182585-tmpfs\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807246 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807620 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807818 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.807943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808093 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808333 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808347 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.808759 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809048 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809261 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809416 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809631 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809828 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.809953 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810066 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-mountpoint-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810069 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810151 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810172 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810199 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810227 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810260 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810292 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810379 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810410 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810434 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810472 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810498 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810516 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810547 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810567 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810587 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810612 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810629 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810685 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810760 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810781 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810808 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810835 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810864 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:29 crc kubenswrapper[5121]: I0218
00:10:29.810905 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810949 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810971 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.810994 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811022 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wf2\" (UniqueName: \"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: 
\"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811044 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811070 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811112 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811167 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811191 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811216 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811264 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811286 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811309 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811330 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811356 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811515 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a41b6648-bba2-4f34-b49b-f95db5ff9426-registration-dir\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 
00:10:29.811600 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.811974 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.812469 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813001 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b46e61bd-a38a-4792-98ee-067e427538c9-tmp-dir\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813148 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813173 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813201 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813221 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813240 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813259 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.813440 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1c0a3ab2-4ddb-4472-af47-3471a18714be-tmpfs\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.814241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4cab190f-d97b-45f5-8875-eb96fc357e91-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.815216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"oauth-openshift-66458b6674-m7q6l\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.815287 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.819471 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-profile-collector-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.836020 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ff9m\" (UniqueName: \"kubernetes.io/projected/005aa352-e543-4bfd-ba57-b2cb37eb98f6-kube-api-access-5ff9m\") pod \"machine-api-operator-755bb95488-hfw2k\" (UID: \"005aa352-e543-4bfd-ba57-b2cb37eb98f6\") " pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.852200 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.852349 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"route-controller-manager-776cdc94d6-w48qb\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.873244 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.876855 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cml8m\" (UniqueName: \"kubernetes.io/projected/4fa50e1e-3367-4e1b-93fb-aea8f3220c81-kube-api-access-cml8m\") pod \"apiserver-9ddfb9f55-422hn\" (UID: \"4fa50e1e-3367-4e1b-93fb-aea8f3220c81\") " pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.904535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"image-pruner-29522880-hmpf4\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.911332 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"controller-manager-65b6cccf98-x8c88\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.915192 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.925923 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.933498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.938021 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.938109 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.946872 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.951123 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-serving-cert\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.959062 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.968544 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:29 crc kubenswrapper[5121]: I0218 00:10:29.977014 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:29.998106 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.008140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0720e131-2f16-4741-bef5-fa81e51085a8-config\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.017716 5121 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.038908 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.050140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0720e131-2f16-4741-bef5-fa81e51085a8-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.059295 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.078285 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.098488 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.108767 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.114567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.114910 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.116371 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.142314 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.144304 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18ee9403_18d0_4528_a3cd_82ea0dba3576.slice/crio-9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14 WatchSource:0}: Error finding container 9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14: Status 404 returned error can't find the container with id 9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.149045 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.163225 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.173364 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.188108 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.188215 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aa83ca9d-be38-4710-ace7-571b9e8b43dc-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.193473 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aa83ca9d-be38-4710-ace7-571b9e8b43dc-config\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.196898 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.219405 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.228706 5121 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-apiservice-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.230133 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1c0a3ab2-4ddb-4472-af47-3471a18714be-webhook-cert\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.240930 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.254593 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-jrx99"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.257098 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.266743 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-trwcb"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.268162 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/efe976a0-6ea6-4283-8b7c-97caa4f2111b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.269989 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.270255 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.280013 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.298227 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.322093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.331035 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-hfw2k"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.336511 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.343030 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.344712 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005aa352_e543_4bfd_ba57_b2cb37eb98f6.slice/crio-f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6 WatchSource:0}: Error finding container f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6: Status 404 returned error can't find the container with id f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.350322 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29522880-hmpf4"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.362200 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.373125 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.377073 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.387731 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.396825 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.409746 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.420748 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.425592 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec21d65e_1eab_42a8_bb64_e6f9ba7b5c69.slice/crio-b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed WatchSource:0}: Error finding container b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed: Status 404 returned error can't find the container with id b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.429454 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc530ba0_1249_4787_8584_22f866581116.slice/crio-8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8 WatchSource:0}: Error finding container 8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8: Status 404 returned error can't find the container with id 8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.438236 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.440668 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-422hn"] Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.458871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: W0218 00:10:30.468732 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fa50e1e_3367_4e1b_93fb_aea8f3220c81.slice/crio-b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8 WatchSource:0}: Error finding container b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8: Status 404 returned error can't find the container with id b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8 Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.482536 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.498054 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.509498 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.516704 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.533438 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-oauth-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.539057 5121 request.go:752] "Waited before sending request" delay="1.017277874s" reason="client-side throttling, not priority and fairness" verb="GET" 
URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.547986 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.555711 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.562755 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-console-config\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.580876 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.587190 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-service-ca\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.606597 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.615264 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-trusted-ca-bundle\") pod \"console-64d44f6ddf-7b8sg\" (UID: 
\"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.623985 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.634553 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-oauth-serving-cert\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.641156 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.660375 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.672525 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4cab190f-d97b-45f5-8875-eb96fc357e91-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.678449 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.696740 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 18 00:10:30 crc 
kubenswrapper[5121]: I0218 00:10:30.719454 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.737317 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.758661 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.774247 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.776208 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.786388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-images\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.796297 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804574 5121 configmap.go:193] Couldn't get configMap 
openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804700 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca podName:cad52ef7-8080-48a2-91e3-5bcfc007b196 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.304676382 +0000 UTC m=+114.819134117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-78c6t" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804779 5121 secret.go:189] Couldn't get secret openshift-ingress/router-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.804884 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.304861547 +0000 UTC m=+114.819319282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-certificate" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806831 5121 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806872 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert podName:1c378e40-50b9-49d3-bbdf-f9cc1e6baaac nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.306863821 +0000 UTC m=+114.821321546 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert") pod "ingress-canary-h64q4" (UID: "1c378e40-50b9-49d3-bbdf-f9cc1e6baaac") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.806945 5121 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.807123 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.307093677 +0000 UTC m=+114.821551412 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.807981 5121 secret.go:189] Couldn't get secret openshift-ingress/router-stats-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808019 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.308011952 +0000 UTC m=+114.822469677 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "stats-auth" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808055 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.808089 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert podName:4b1e56fa-e38b-48bc-9768-0bc82aca0a0c nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.308080824 +0000 UTC m=+114.822538559 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-zsz4p" (UID: "4b1e56fa-e38b-48bc-9768-0bc82aca0a0c") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809941 5121 secret.go:189] Couldn't get secret openshift-ingress/router-metrics-certs-default: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809972 5121 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.809991 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.309982174 +0000 UTC m=+114.824439909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.810048 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics podName:cad52ef7-8080-48a2-91e3-5bcfc007b196 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.310030265 +0000 UTC m=+114.824488000 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics") pod "marketplace-operator-547dbd544d-78c6t" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812449 5121 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config podName:0d3e4d34-c74d-4572-aca8-da4c6c85fa79 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312483011 +0000 UTC m=+114.826940746 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config") pod "openshift-kube-scheduler-operator-54f497555d-km69x" (UID: "0d3e4d34-c74d-4572-aca8-da4c6c85fa79") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812524 5121 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812570 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls podName:b46e61bd-a38a-4792-98ee-067e427538c9 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312560913 +0000 UTC m=+114.827018648 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls") pod "dns-default-rsbpp" (UID: "b46e61bd-a38a-4792-98ee-067e427538c9") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812597 5121 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.812618 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume podName:b46e61bd-a38a-4792-98ee-067e427538c9 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.312611924 +0000 UTC m=+114.827069659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume") pod "dns-default-rsbpp" (UID: "b46e61bd-a38a-4792-98ee-067e427538c9") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813698 5121 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813736 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle podName:db5b1911-47a0-41f1-b793-924df4056e20 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313729064 +0000 UTC m=+114.828186799 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle") pod "service-ca-74545575db-vlht9" (UID: "db5b1911-47a0-41f1-b793-924df4056e20") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813755 5121 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813774 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key podName:db5b1911-47a0-41f1-b793-924df4056e20 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313769666 +0000 UTC m=+114.828227401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key") pod "service-ca-74545575db-vlht9" (UID: "db5b1911-47a0-41f1-b793-924df4056e20") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813787 5121 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813806 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist podName:9b4e56ad-da89-4541-842d-17ba2d9bcb0a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313801436 +0000 UTC m=+114.828259171 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-jc5sl" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813837 5121 configmap.go:193] Couldn't get configMap openshift-ingress/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813856 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle podName:8724461b-b94b-4f4a-9c9f-4a131b9e02c2 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313851628 +0000 UTC m=+114.828309363 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle") pod "router-default-68cf44c8b8-mvs4c" (UID: "8724461b-b94b-4f4a-9c9f-4a131b9e02c2") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813870 5121 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813890 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert podName:0d3e4d34-c74d-4572-aca8-da4c6c85fa79 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313884188 +0000 UTC m=+114.828341923 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert") pod "openshift-kube-scheduler-operator-54f497555d-km69x" (UID: "0d3e4d34-c74d-4572-aca8-da4c6c85fa79") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813905 5121 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813933 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313920959 +0000 UTC m=+114.828378694 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813947 5121 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813966 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config podName:4ead99f6-fe0b-418e-b25c-06d177458b2a nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.31396142 +0000 UTC m=+114.828419155 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config") pod "machine-approver-54c688565-jxkj2" (UID: "4ead99f6-fe0b-418e-b25c-06d177458b2a") : failed to sync configmap cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.813979 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814003 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert podName:a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.313997001 +0000 UTC m=+114.828454726 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert") pod "catalog-operator-75ff9f647d-wwrwg" (UID: "a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe") : failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814489 5121 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 18 00:10:30 crc kubenswrapper[5121]: E0218 00:10:30.814528 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert podName:21a8987a-ee46-4b59-b949-55032c182585 nodeName:}" failed. No retries permitted until 2026-02-18 00:10:31.314521235 +0000 UTC m=+114.828978970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert") pod "olm-operator-5cdf44d969-htdrd" (UID: "21a8987a-ee46-4b59-b949-55032c182585") : failed to sync secret cache: timed out waiting for the condition
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.815931 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.838326 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.856554 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860042 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerStarted","Data":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860096 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerStarted","Data":"b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.860925 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863087 5121 generic.go:358] "Generic (PLEG): container finished" podID="e173473c-5d44-44cf-833c-2a88d061dd9f" containerID="ad8a18db20601067d45dfc8d825312457a4a229c9120ff9c3d1fce49c153e941" exitCode=0
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerDied","Data":"ad8a18db20601067d45dfc8d825312457a4a229c9120ff9c3d1fce49c153e941"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863177 5121 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-x8c88 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863203 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerStarted","Data":"479a641ee1fca10577a89b42bb5fc7f8cffb119bef0ce8c1fb130d452b8c6f86"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.863258 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"7eb10c7f12c4c12fc6b146ae07090a3f53a77866aae9b926e9eff03e5f015a83"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866461 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"997b09ce15268eb52254feb45e352b0775a5b304238dd427e95511fadc30437f"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.866477 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" event={"ID":"005aa352-e543-4bfd-ba57-b2cb37eb98f6","Type":"ContainerStarted","Data":"f775b10f2914a569d55f371fa181aefde15ad7b24570978e11dff65571da6ac6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.869022 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" event={"ID":"62cfebd6-02c7-4437-9be3-60aec3d91f1b","Type":"ContainerStarted","Data":"e5102d319ba62a16ecff518a9d80d5153a5bbf1e8072c11bbdae5c28c3135a87"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.869076 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" event={"ID":"62cfebd6-02c7-4437-9be3-60aec3d91f1b","Type":"ContainerStarted","Data":"60a6eb0ef06edadd6ed1792649a499a5f18a56a5b6dba711cb4774df3958f0c6"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.870608 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerStarted","Data":"c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.870643 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerStarted","Data":"3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.872055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" event={"ID":"18ee9403-18d0-4528-a3cd-82ea0dba3576","Type":"ContainerStarted","Data":"6ae9c66a898663b3f60f7204d34e9d3068688099ce5cad9cfd3c1cdacf23426e"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.872112 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" event={"ID":"18ee9403-18d0-4528-a3cd-82ea0dba3576","Type":"ContainerStarted","Data":"9f4b0f51776778636c85072bc7cda10c84357ec646f4cf6d675f9c5d8dc10b14"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.876727 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"b6a42c2328f1173043ca079c0377bb1dc407c84334f163694ab8d1c6757125b8"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.877368 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881611 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerStarted","Data":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881700 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerStarted","Data":"7bc05f9957f09f27cee7504d54470ecd9c12fb4c5e2801caea1078ac4942d85e"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.881729 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.883899 5121 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-m7q6l container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.883992 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.885925 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerStarted","Data":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.886322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerStarted","Data":"8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8"}
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.886351 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.887461 5121 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-w48qb container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.887501 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.896788 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.921990 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.937135 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.957467 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.976718 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 18 00:10:30 crc kubenswrapper[5121]: I0218 00:10:30.996942 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.016435 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.037252 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.057026 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.077094 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.097677 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.117442 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.138333 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.157752 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.187152 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.198088 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.217045 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.236133 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.256985 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.276752 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.298837 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.323464 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.338185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349146 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349288 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349342 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349451 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349499 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349731 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349796 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349862 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.349931 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350063 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350125 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350156 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350212 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350238 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350265 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350311 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.350384 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.351200 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-auth-proxy-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.351752 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db5b1911-47a0-41f1-b793-924df4056e20-signing-cabundle\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.352560 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.352940 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ead99f6-fe0b-418e-b25c-06d177458b2a-config\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.353440 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-config\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.355416 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/4ead99f6-fe0b-418e-b25c-06d177458b2a-machine-approver-tls\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356276 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db5b1911-47a0-41f1-b793-924df4056e20-signing-key\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.356771 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-srv-cert\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.357839 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.358482 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/21a8987a-ee46-4b59-b949-55032c182585-srv-cert\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.360775 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.376418 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.388744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-stats-auth\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.401314 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.416468 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.437619 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.447632 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-default-certificate\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.457336 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.464288 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-metrics-certs\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.477443 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.484935 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-service-ca-bundle\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: \"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.501440 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.516470 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.516777 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.537335 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.554516 5121 request.go:752] "Waited before sending request" delay="1.982224412s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.557898 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.576839 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.597031 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.617111 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.637257 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.657510 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.661565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b46e61bd-a38a-4792-98ee-067e427538c9-config-volume\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.677586 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.683605 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b46e61bd-a38a-4792-98ee-067e427538c9-metrics-tls\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.697342 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.701331 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.716685 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.736882 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.760048 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.765502 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-cert\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.776674 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.855456 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"collect-profiles-29522880-b2sfp\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.874743 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69df6480-3d02-4112-b8db-3507dd5a5f49-kube-api-access\") pod \"kube-apiserver-operator-575994946d-mm659\" (UID: \"69df6480-3d02-4112-b8db-3507dd5a5f49\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.892064 5121 generic.go:358] "Generic (PLEG): container finished" podID="4fa50e1e-3367-4e1b-93fb-aea8f3220c81" containerID="d046876f99826d3366167a71fbd936d5cf246d1cf90adccb1d510e846482c883" exitCode=0
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.892117 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerDied","Data":"d046876f99826d3366167a71fbd936d5cf246d1cf90adccb1d510e846482c883"}
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.895192 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" event={"ID":"e173473c-5d44-44cf-833c-2a88d061dd9f","Type":"ContainerStarted","Data":"0d22ca1938f30c62be398f37a97feb86853ed24566690cb1f3b80662ae71ea89"}
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.903202 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f25sc\" (UniqueName: \"kubernetes.io/projected/bbdb0e57-487f-44df-bfea-01e173ebb1e3-kube-api-access-f25sc\") pod \"console-operator-67c89758df-qmtl4\" (UID: \"bbdb0e57-487f-44df-bfea-01e173ebb1e3\") " pod="openshift-console-operator/console-operator-67c89758df-qmtl4"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.936621 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.936938 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2srpv\" (UniqueName: \"kubernetes.io/projected/5e287aff-1485-4233-8648-ece2622ccf37-kube-api-access-2srpv\") pod \"dns-operator-799b87ffcd-z2wj9\" (UID: \"5e287aff-1485-4233-8648-ece2622ccf37\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.940407 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.953049 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.959465 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.963019 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wmm9\" (UniqueName: \"kubernetes.io/projected/49d45bda-ec47-407b-b527-c7267c3825c0-kube-api-access-8wmm9\") pod \"cluster-image-registry-operator-86c45576b9-dsqn5\" (UID: \"49d45bda-ec47-407b-b527-c7267c3825c0\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"
Feb 18 00:10:31 crc kubenswrapper[5121]: I0218 00:10:31.975413 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0720e131-2f16-4741-bef5-fa81e51085a8-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-c8wq7\" (UID: \"0720e131-2f16-4741-bef5-fa81e51085a8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.001295 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfjf5\" (UniqueName: \"kubernetes.io/projected/0e4dec16-09b2-4707-a2f6-f502d32b4fb8-kube-api-access-mfjf5\") pod \"openshift-config-operator-5777786469-zvwwb\" (UID: \"0e4dec16-09b2-4707-a2f6-f502d32b4fb8\") " pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.018293 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-smtxj\" (UniqueName: \"kubernetes.io/projected/9c0d1702-8700-443c-9bf2-afa4222bd41c-kube-api-access-smtxj\") pod \"openshift-controller-manager-operator-686468bdd5-v6n92\" (UID: 
\"9c0d1702-8700-443c-9bf2-afa4222bd41c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.029730 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.034917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjcq\" (UniqueName: \"kubernetes.io/projected/33af1cb9-6bf3-4a05-8884-c2e1ae482ada-kube-api-access-7zjcq\") pod \"authentication-operator-7f5c659b84-c95sd\" (UID: \"33af1cb9-6bf3-4a05-8884-c2e1ae482ada\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.052321 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8gh7\" (UniqueName: \"kubernetes.io/projected/a3597721-7184-4c2a-8050-ccec6fa345e4-kube-api-access-h8gh7\") pod \"cluster-samples-operator-6b564684c8-sswjl\" (UID: \"a3597721-7184-4c2a-8050-ccec6fa345e4\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.073363 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-shnfg\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-kube-api-access-shnfg\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.100153 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxcvj\" (UniqueName: \"kubernetes.io/projected/8724461b-b94b-4f4a-9c9f-4a131b9e02c2-kube-api-access-jxcvj\") pod \"router-default-68cf44c8b8-mvs4c\" (UID: 
\"8724461b-b94b-4f4a-9c9f-4a131b9e02c2\") " pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.121850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-chw74\" (UniqueName: \"kubernetes.io/projected/4cab190f-d97b-45f5-8875-eb96fc357e91-kube-api-access-chw74\") pod \"machine-config-controller-f9cdd68f7-bw9b4\" (UID: \"4cab190f-d97b-45f5-8875-eb96fc357e91\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.134082 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmqm\" (UniqueName: \"kubernetes.io/projected/efe976a0-6ea6-4283-8b7c-97caa4f2111b-kube-api-access-kfmqm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-djfbc\" (UID: \"efe976a0-6ea6-4283-8b7c-97caa4f2111b\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.156516 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.164911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.170361 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d3e4d34-c74d-4572-aca8-da4c6c85fa79-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-km69x\" (UID: \"0d3e4d34-c74d-4572-aca8-da4c6c85fa79\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.170697 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.177102 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.199136 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbqzj\" (UniqueName: \"kubernetes.io/projected/4ead99f6-fe0b-418e-b25c-06d177458b2a-kube-api-access-gbqzj\") pod \"machine-approver-54c688565-jxkj2\" (UID: \"4ead99f6-fe0b-418e-b25c-06d177458b2a\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.204875 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4lhr\" (UniqueName: \"kubernetes.io/projected/1c0a3ab2-4ddb-4472-af47-3471a18714be-kube-api-access-l4lhr\") pod \"packageserver-7d4fc7d867-jp5zf\" (UID: \"1c0a3ab2-4ddb-4472-af47-3471a18714be\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.224971 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.230228 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.233991 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgwqw\" (UniqueName: \"kubernetes.io/projected/aa83ca9d-be38-4710-ace7-571b9e8b43dc-kube-api-access-vgwqw\") pod \"kube-storage-version-migrator-operator-565b79b866-vqrnq\" (UID: \"aa83ca9d-be38-4710-ace7-571b9e8b43dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.249589 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzxl6\" (UniqueName: \"kubernetes.io/projected/dbdd0c4c-8844-44cd-885a-c2b40db8dcb4-kube-api-access-wzxl6\") pod \"console-64d44f6ddf-7b8sg\" (UID: \"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4\") " pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.273830 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.274194 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.299375 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.304017 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"marketplace-operator-547dbd544d-78c6t\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.304268 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.310193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvbsm\" (UniqueName: \"kubernetes.io/projected/6d918a65-a99e-41a8-97de-51c2cc74b24b-kube-api-access-pvbsm\") pod \"downloads-747b44746d-mkw5h\" (UID: \"6d918a65-a99e-41a8-97de-51c2cc74b24b\") " pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.312513 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.314281 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"cni-sysctl-allowlist-ds-jc5sl\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.337312 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.345502 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vnz\" (UniqueName: \"kubernetes.io/projected/1c378e40-50b9-49d3-bbdf-f9cc1e6baaac-kube-api-access-z7vnz\") pod \"ingress-canary-h64q4\" (UID: \"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac\") " pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.345811 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.357412 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wj4v\" (UniqueName: \"kubernetes.io/projected/21a8987a-ee46-4b59-b949-55032c182585-kube-api-access-7wj4v\") pod \"olm-operator-5cdf44d969-htdrd\" (UID: \"21a8987a-ee46-4b59-b949-55032c182585\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.374857 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.389613 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.396927 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wf2\" (UniqueName: \"kubernetes.io/projected/4b1e56fa-e38b-48bc-9768-0bc82aca0a0c-kube-api-access-r7wf2\") pod \"package-server-manager-77f986bd66-zsz4p\" (UID: \"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.397411 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kxpp\" (UniqueName: \"kubernetes.io/projected/a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe-kube-api-access-5kxpp\") pod \"catalog-operator-75ff9f647d-wwrwg\" (UID: \"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.413472 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wxwv\" (UniqueName: \"kubernetes.io/projected/db5b1911-47a0-41f1-b793-924df4056e20-kube-api-access-8wxwv\") pod \"service-ca-74545575db-vlht9\" (UID: \"db5b1911-47a0-41f1-b793-924df4056e20\") " pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.413747 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.416247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659"] Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.421774 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-vlht9" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.422282 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-rqnfg\" (UID: \"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.431952 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.438404 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp"] Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.446564 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.462527 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.463899 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-z2wj9"] Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.464614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8prxr\" (UniqueName: \"kubernetes.io/projected/0dc8a8e0-dd61-46e8-92e0-7f90eceebf36-kube-api-access-8prxr\") pod \"machine-config-operator-67c9d58cbb-8wm6t\" (UID: \"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.467993 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr95z\" (UniqueName: \"kubernetes.io/projected/a41b6648-bba2-4f34-b49b-f95db5ff9426-kube-api-access-sr95z\") pod \"csi-hostpathplugin-v9jcr\" (UID: \"a41b6648-bba2-4f34-b49b-f95db5ff9426\") " pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.472508 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.480305 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wstdf\" (UniqueName: \"kubernetes.io/projected/b46e61bd-a38a-4792-98ee-067e427538c9-kube-api-access-wstdf\") pod \"dns-default-rsbpp\" (UID: \"b46e61bd-a38a-4792-98ee-067e427538c9\") " pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.486819 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.502367 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.518499 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.522922 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.539172 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.542165 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69df6480_3d02_4112_b8db_3507dd5a5f49.slice/crio-bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363 WatchSource:0}: Error finding container bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363: Status 404 returned error can't find the container with id bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363 Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.550834 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-h64q4" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585738 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585808 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585826 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod 
\"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585850 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585870 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585901 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585918 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.585941 
5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586025 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586050 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586068 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586103 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: 
\"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586121 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586139 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586181 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586195 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.586258 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.589966 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92"] Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.598008 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.598412 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.098397126 +0000 UTC m=+116.612854861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.608527 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9acc779e_6e10_4bc7_851f_c14ba843c057.slice/crio-ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e WatchSource:0}: Error finding container ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e: Status 404 returned error can't find the container with id ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.619594 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" Feb 18 00:10:32 crc kubenswrapper[5121]: W0218 00:10:32.641348 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e287aff_1485_4233_8648_ece2622ccf37.slice/crio-b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869 WatchSource:0}: Error finding container b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869: Status 404 returned error can't find the container with id b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869 Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.664110 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.669765 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" podStartSLOduration=93.669515394 podStartE2EDuration="1m33.669515394s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:32.661579723 +0000 UTC m=+116.176037468" watchObservedRunningTime="2026-02-18 00:10:32.669515394 +0000 UTC m=+116.183973139"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689285 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689770 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.689806 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.698612 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.198589539 +0000 UTC m=+116.713047274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.699520 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.699688 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700083 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700108 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.700195 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.702375 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.702914 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.202898254 +0000 UTC m=+116.717355989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704035 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704094 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704147 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704314 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704436 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704460 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.704925 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.705002 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.705977 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.709899 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.721177 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c318bc6-d06b-45e4-a256-a74767b40a60-config\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.721493 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl"]
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.755500 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c318bc6-d06b-45e4-a256-a74767b40a60-serving-cert\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756161 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-node-bootstrap-token\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756657 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.756895 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.758913 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpcv4\" (UniqueName: \"kubernetes.io/projected/38e2fa84-50e3-4aa5-9269-6e423103dbe2-kube-api-access-hpcv4\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.759980 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/38e2fa84-50e3-4aa5-9269-6e423103dbe2-webhook-certs\") pod \"multus-admission-controller-69db94689b-p8ssx\" (UID: \"38e2fa84-50e3-4aa5-9269-6e423103dbe2\") " pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.766402 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.766967 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-certs\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.809455 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.810347 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.31032069 +0000 UTC m=+116.824778425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.810742 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.811024 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.311017848 +0000 UTC m=+116.825475583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.819123 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2h9k\" (UniqueName: \"kubernetes.io/projected/7c318bc6-d06b-45e4-a256-a74767b40a60-kube-api-access-n2h9k\") pod \"service-ca-operator-5b9c976747-pblgm\" (UID: \"7c318bc6-d06b-45e4-a256-a74767b40a60\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.833099 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc2zm\" (UniqueName: \"kubernetes.io/projected/a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1-kube-api-access-lc2zm\") pod \"machine-config-server-vn45p\" (UID: \"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1\") " pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.837352 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.864260 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjn6m\" (UniqueName: \"kubernetes.io/projected/44329a91-5654-4584-9009-4ca6f7e45584-kube-api-access-mjn6m\") pod \"migrator-866fcbc849-lxtfd\" (UID: \"44329a91-5654-4584-9009-4ca6f7e45584\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.880406 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.881978 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.911756 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:32 crc kubenswrapper[5121]: E0218 00:10:32.912232 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.41219423 +0000 UTC m=+116.926651965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.922915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"1f2a145db7b768525fc203c9903cddb7b33069c700d92fe1f80c8c26d0d02829"}
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.954021 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.976490 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" event={"ID":"8724461b-b94b-4f4a-9c9f-4a131b9e02c2","Type":"ContainerStarted","Data":"93d25e22af9ffc3ca13c60846acc7c1b4739d1c9a575fe62e5fcf5bc43f3b946"}
Feb 18 00:10:32 crc kubenswrapper[5121]: I0218 00:10:32.988360 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerStarted","Data":"ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.005939 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"7cf679dc87cc0f98dab0f8bb142577dba1f068fedf9d23c8c1b34c6b0f64ee77"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.007723 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"b01dd50a3fa3616dbc053781a77ec513f109fa4c6984ccff100f86d30e83b869"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.014009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.014416 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.514398757 +0000 UTC m=+117.028856492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.017705 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" event={"ID":"69df6480-3d02-4112-b8db-3507dd5a5f49","Type":"ContainerStarted","Data":"bcd8b5b4c69d74026c87a06820eeaf14bcc9eecee76f4bb070886e229fb9b363"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.021412 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" event={"ID":"9c0d1702-8700-443c-9bf2-afa4222bd41c","Type":"ContainerStarted","Data":"ee06a2eaa9d9a14b66b7bb3791daece348f4bc674bc37c8a7b0df57f24bbdbbe"}
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.039596 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.081964 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vn45p"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.116173 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.116850 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.616827961 +0000 UTC m=+117.131285706 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.162180 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-trwcb" podStartSLOduration=95.162156824 podStartE2EDuration="1m35.162156824s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.133112775 +0000 UTC m=+116.647570510" watchObservedRunningTime="2026-02-18 00:10:33.162156824 +0000 UTC m=+116.676614559"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.164087 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-zvwwb"]
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.210953 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podStartSLOduration=94.210935337 podStartE2EDuration="1m34.210935337s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.17426008 +0000 UTC m=+116.688717825" watchObservedRunningTime="2026-02-18 00:10:33.210935337 +0000 UTC m=+116.725393082"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.218926 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.219373 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.719355117 +0000 UTC m=+117.233812852 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.293493 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-hfw2k" podStartSLOduration=94.293470182 podStartE2EDuration="1m34.293470182s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.292835735 +0000 UTC m=+116.807293470" watchObservedRunningTime="2026-02-18 00:10:33.293470182 +0000 UTC m=+116.807927927"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.320205 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.323154 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.823129586 +0000 UTC m=+117.337587321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.422132 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.423769 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:33.923753533 +0000 UTC m=+117.438211338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.526019 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.526349 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.0263242 +0000 UTC m=+117.540781925 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: W0218 00:10:33.527734 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4dec16_09b2_4707_a2f6_f502d32b4fb8.slice/crio-ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344 WatchSource:0}: Error finding container ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344: Status 404 returned error can't find the container with id ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.630054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.630429 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.130415417 +0000 UTC m=+117.644873152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.651174 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podStartSLOduration=95.651154568 podStartE2EDuration="1m35.651154568s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.615734024 +0000 UTC m=+117.130191779" watchObservedRunningTime="2026-02-18 00:10:33.651154568 +0000 UTC m=+117.165612303"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.730975 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.731413 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.231396823 +0000 UTC m=+117.745854558 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.812851 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podStartSLOduration=95.812829639 podStartE2EDuration="1m35.812829639s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:33.811940395 +0000 UTC m=+117.326398140" watchObservedRunningTime="2026-02-18 00:10:33.812829639 +0000 UTC m=+117.327287374"
Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.839427 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.840106 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.34009245 +0000 UTC m=+117.854550185 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.854609 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-7b8sg"] Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.888158 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd"] Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.895638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-qmtl4"] Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.917436 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7"] Feb 18 00:10:33 crc kubenswrapper[5121]: I0218 00:10:33.941594 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:33 crc kubenswrapper[5121]: E0218 00:10:33.941961 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:34.441943269 +0000 UTC m=+117.956401004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.022913 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbdd0c4c_8844_44cd_885a_c2b40db8dcb4.slice/crio-3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509 WatchSource:0}: Error finding container 3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509: Status 404 returned error can't find the container with id 3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509 Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.045257 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.045710 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.545694237 +0000 UTC m=+118.060151972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.047984 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbdb0e57_487f_44df_bfea_01e173ebb1e3.slice/crio-e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1 WatchSource:0}: Error finding container e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1: Status 404 returned error can't find the container with id e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1 Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.058695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerStarted","Data":"a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.059577 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29522880-hmpf4" podStartSLOduration=96.059564929 podStartE2EDuration="1m36.059564929s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.010927429 +0000 UTC m=+117.525385174" watchObservedRunningTime="2026-02-18 00:10:34.059564929 +0000 UTC m=+117.574022674" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.064724 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"f510b314c37ea5aa5b6f533d4de607061ea409f8191aa27663cd155956f43fdd"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.064777 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"16ab1002607b82e28ce64fa66aeb70d32c5d21971292cd555f66e076a7ee878e"} Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.073940 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0720e131_2f16_4741_bef5_fa81e51085a8.slice/crio-be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5 WatchSource:0}: Error finding container be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5: Status 404 returned error can't find the container with id be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5 Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.077118 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerStarted","Data":"ed622b43aa97db2563db29c1e6a71d2e9f54835f54fb1935fd62312de60e2344"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.083722 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vn45p" event={"ID":"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1","Type":"ContainerStarted","Data":"e8758536e80a5ee0cdc87084ec4fe6e239c7c2b24774b2a2c3c8af7ce73e9cd6"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.083805 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-vn45p" event={"ID":"a0b8bec6-390d-4bf8-b54b-a6b4d0b790c1","Type":"ContainerStarted","Data":"c6638e39589d920dc2d02de998bd9ec9dd558f4591b874cfddb785f7e59686b5"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.093411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"524fda9e36c5660caad1709d2992481c8479ec11e86aef492234d14b781402b4"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.105588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" event={"ID":"9c0d1702-8700-443c-9bf2-afa4222bd41c","Type":"ContainerStarted","Data":"33d1d8377d58687970fadd809be92b41d6b6cb2794a75052ad252321b9d10942"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.135695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerStarted","Data":"ce83ab25e1e8e9f955af7b1409e400ceb125028d31573c59d7119d8ace62ac10"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.148861 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.152072 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:34.652051513 +0000 UTC m=+118.166509248 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.192181 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7b8sg" event={"ID":"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4","Type":"ContainerStarted","Data":"3a1b9e44606bee9c559c4e80d6bdd1ba9fa2b687a43322c1e5a9c48d9604d509"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.198441 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" event={"ID":"8724461b-b94b-4f4a-9c9f-4a131b9e02c2","Type":"ContainerStarted","Data":"e39325a0310e88546bb1492440a0e4d5c5e0531d5beedc2393f5dc9e390153bb"} Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.235836 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.237893 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.248083 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.252282 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.252777 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.752762862 +0000 UTC m=+118.267220587 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.253976 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-rsbpp"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.277715 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.278292 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.285762 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/downloads-747b44746d-mkw5h"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.305950 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-v9jcr"] Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.312566 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c3804ba_f1a0_4e30_9bfb_a6ebc39f7cd1.slice/crio-5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96 WatchSource:0}: Error finding container 5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96: Status 404 returned error can't find the container with id 5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96 Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.316610 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.320283 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.320473 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-j5zbs" podStartSLOduration=96.320442888 podStartE2EDuration="1m36.320442888s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.318010546 +0000 UTC m=+117.832468281" watchObservedRunningTime="2026-02-18 00:10:34.320442888 +0000 UTC m=+117.834900643" Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.347129 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb46e61bd_a38a_4792_98ee_067e427538c9.slice/crio-40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827 WatchSource:0}: Error finding container 40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827: Status 404 returned error can't find the container with id 40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827 Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.353048 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.354716 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.854693263 +0000 UTC m=+118.369150998 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.436111 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.454563 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.455513 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.456415 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:34.956387947 +0000 UTC m=+118.470845692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.489099 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.489746 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.495409 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-vlht9"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.498356 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p8ssx"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.501462 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.518189 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.522955 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-h64q4"] Feb 18 00:10:34 crc kubenswrapper[5121]: W0218 00:10:34.526768 5121 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b1e56fa_e38b_48bc_9768_0bc82aca0a0c.slice/crio-6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c WatchSource:0}: Error finding container 6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c: Status 404 returned error can't find the container with id 6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.528870 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.530590 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.559321 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.559690 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.059673873 +0000 UTC m=+118.574131608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.578388 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm"] Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.614170 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podStartSLOduration=96.614151125 podStartE2EDuration="1m36.614151125s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.592691255 +0000 UTC m=+118.107149190" watchObservedRunningTime="2026-02-18 00:10:34.614151125 +0000 UTC m=+118.128608850" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.615447 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-v6n92" podStartSLOduration=96.615441259 podStartE2EDuration="1m36.615441259s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.613817937 +0000 UTC m=+118.128275682" watchObservedRunningTime="2026-02-18 00:10:34.615441259 +0000 UTC m=+118.129898994" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.661151 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.664336 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.164317105 +0000 UTC m=+118.678775020 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.685559 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" podStartSLOduration=96.685529118 podStartE2EDuration="1m36.685529118s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.680944619 +0000 UTC m=+118.195402354" watchObservedRunningTime="2026-02-18 00:10:34.685529118 +0000 UTC m=+118.199986853" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.701631 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vn45p" podStartSLOduration=5.701612749 podStartE2EDuration="5.701612749s" 
podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:34.700881629 +0000 UTC m=+118.215339364" watchObservedRunningTime="2026-02-18 00:10:34.701612749 +0000 UTC m=+118.216070484" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.765899 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.766419 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.266218945 +0000 UTC m=+118.780676700 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.766856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.769005 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.268993838 +0000 UTC m=+118.783451573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.869574 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.870095 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.370048905 +0000 UTC m=+118.884506640 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.870933 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.871364 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.371356589 +0000 UTC m=+118.885814324 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.969269 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.969349 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.973973 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.974292 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.474265765 +0000 UTC m=+118.988723500 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.976452 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:34 crc kubenswrapper[5121]: E0218 00:10:34.978373 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.47829592 +0000 UTC m=+118.992753655 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:34 crc kubenswrapper[5121]: I0218 00:10:34.981850 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.072060 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-jrx99" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.081692 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.082233 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.582207563 +0000 UTC m=+119.096665298 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.167022 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.173100 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:35 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:35 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:35 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.173153 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.185518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.187581 5121 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.687564564 +0000 UTC m=+119.202022299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.289058 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.289472 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.789417451 +0000 UTC m=+119.303875196 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.295791 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.296306 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.796280771 +0000 UTC m=+119.310738506 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.334707 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e4dec16-09b2-4707-a2f6-f502d32b4fb8" containerID="98443954ec3593da1274d3bcef771583dcff25c4f81e7b3cbcc6a0883e483dea" exitCode=0 Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.334846 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerDied","Data":"98443954ec3593da1274d3bcef771583dcff25c4f81e7b3cbcc6a0883e483dea"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.379175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" event={"ID":"7c318bc6-d06b-45e4-a256-a74767b40a60","Type":"ContainerStarted","Data":"ab1d925c968d47f2408727cc7fc2a524c16033995c8ea19954ecfa00650c8979"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.383532 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"7d83d07068fffeffa307275920be1250501974d71f35f484089eb2aafc67f81e"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.385307 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h64q4" 
event={"ID":"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac","Type":"ContainerStarted","Data":"fe6095032203997db84a66b4ac6bc9a60918837453b8552b5b8c37d8f2860fcb"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.393915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" event={"ID":"0d3e4d34-c74d-4572-aca8-da4c6c85fa79","Type":"ContainerStarted","Data":"5cfaa198d1c53ac88755f51dae35e88917dba5760df800689c0f2305b60bd633"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.398002 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.399987 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:35.899949677 +0000 UTC m=+119.414407412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.418617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" event={"ID":"33af1cb9-6bf3-4a05-8884-c2e1ae482ada","Type":"ContainerStarted","Data":"2bee31f82bdc0f2cc43d8731caa55b7da3880fa6605ae9956d41530ef635988c"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.418994 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" event={"ID":"33af1cb9-6bf3-4a05-8884-c2e1ae482ada","Type":"ContainerStarted","Data":"93d2a97bea82695deb6119a63daa54916d6a4b6bdd3fc7907dbd8e150f22ac5f"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.436538 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" event={"ID":"5e287aff-1485-4233-8648-ece2622ccf37","Type":"ContainerStarted","Data":"1339772cc48b1a0b2dbd5f15337cea2c465de890e8e947ce732ca120df093ee6"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.442915 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" event={"ID":"bbdb0e57-487f-44df-bfea-01e173ebb1e3","Type":"ContainerStarted","Data":"ac2750573ce29b122cf6b672117a9a63563998bc48c114f0b6d8de00c608c37a"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.442978 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-console-operator/console-operator-67c89758df-qmtl4" event={"ID":"bbdb0e57-487f-44df-bfea-01e173ebb1e3","Type":"ContainerStarted","Data":"e96361f936d12c0d6be79564bd6c4c7a8a1dfc9f3fb42a7070cd6a6e670d45c1"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.444333 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.449823 5121 patch_prober.go:28] interesting pod/console-operator-67c89758df-qmtl4 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.449902 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" podUID="bbdb0e57-487f-44df-bfea-01e173ebb1e3" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/readyz\": dial tcp 10.217.0.12:8443: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.451544 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" event={"ID":"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe","Type":"ContainerStarted","Data":"6cd544e195b0649cf1787498d849054212c475bf78760c818b8ede8cfdd0393a"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.482922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"fb3f0db1e232b21db1b3649c629c4cd2168577f8008392e223f109f67dcb7d1b"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.483277 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"6e98c1095e7d2a31fbb41a2881ca420c8be83284faf532444b2b37fd881d93d9"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.504911 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" podStartSLOduration=97.504870906 podStartE2EDuration="1m37.504870906s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.497192536 +0000 UTC m=+119.011650271" watchObservedRunningTime="2026-02-18 00:10:35.504870906 +0000 UTC m=+119.019328641" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.507971 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-c95sd" podStartSLOduration=97.507948426 podStartE2EDuration="1m37.507948426s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.451901973 +0000 UTC m=+118.966359728" watchObservedRunningTime="2026-02-18 00:10:35.507948426 +0000 UTC m=+119.022406161" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.519591 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.521935 5121 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.02192069 +0000 UTC m=+119.536378425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.532149 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-z2wj9" podStartSLOduration=97.532130278 podStartE2EDuration="1m37.532130278s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.527134417 +0000 UTC m=+119.041592152" watchObservedRunningTime="2026-02-18 00:10:35.532130278 +0000 UTC m=+119.046588003" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.591289 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerStarted","Data":"1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.605249 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.620500 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.620803 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.120753771 +0000 UTC m=+119.635211506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.621267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.621833 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.121820879 +0000 UTC m=+119.636278614 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.629810 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"40348ef6ee8332b8612a9a3393e30f55eb8e0b8d5c4a00b362ed704c8d7ea827"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.643836 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podStartSLOduration=6.643812233 podStartE2EDuration="6.643812233s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.630396292 +0000 UTC m=+119.144854027" watchObservedRunningTime="2026-02-18 00:10:35.643812233 +0000 UTC m=+119.158269968" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.723533 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.724879 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.224839487 +0000 UTC m=+119.739297222 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.730455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"88cc256e548c340abf25e905fa09540c865609cc35ca0674f05bd495618cfee7"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.740770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"3bfd7d2df98a7f8f131b6a896227f3b6080d444d029c8e32b90399f1813c0a26"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.740834 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" event={"ID":"4ead99f6-fe0b-418e-b25c-06d177458b2a","Type":"ContainerStarted","Data":"b5adce8767a51e5bc604cfd80d2b7076394db921157752a65b785676c8d8f897"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.753331 5121 generic.go:358] "Generic (PLEG): container finished" podID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerID="a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63" exitCode=0 Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.753582 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerDied","Data":"a1262385f4cc216d17b492cfe05587103e0e9b5d3a5679bc058236c244b28b63"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.759502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" event={"ID":"efe976a0-6ea6-4283-8b7c-97caa4f2111b","Type":"ContainerStarted","Data":"06924237f5a12bb896de56c20ffcf59ee476f318eac0ef1cb36097a01118f830"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.770079 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" event={"ID":"a3597721-7184-4c2a-8050-ccec6fa345e4","Type":"ContainerStarted","Data":"b221f77b42431999856b0c1f1b2be6e67c1ba4b2b16da33f41cd28cd6a34fe03"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.787892 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.807030 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"6aa6d6abd9029292beaa999313391063e7dfc8560a9303041ad206ea141a32a3"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.819021 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-jxkj2" podStartSLOduration=97.818958835 podStartE2EDuration="1m37.818958835s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.797938886 +0000 UTC m=+119.312396631" 
watchObservedRunningTime="2026-02-18 00:10:35.818958835 +0000 UTC m=+119.333416570" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.823146 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" event={"ID":"4fa50e1e-3367-4e1b-93fb-aea8f3220c81","Type":"ContainerStarted","Data":"25c57dfacc7f0b438706aa06d3aa285e6b50e71a419c939596c433e834b52465"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.825457 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.827015 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.326976483 +0000 UTC m=+119.841434218 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.848510 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-sswjl" podStartSLOduration=97.848487926 podStartE2EDuration="1m37.848487926s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.846189055 +0000 UTC m=+119.360646790" watchObservedRunningTime="2026-02-18 00:10:35.848487926 +0000 UTC m=+119.362945671" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.852015 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.852078 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"a35c1a8554f97c336c169b9b7ab07394eb161632ed304015d160d6c0a71bba70"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.853317 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 
00:10:35.859191 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"b024dc4376a64f651df0bc3a112fbec788ffb545ed121d64a800b2fd5c634f79"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863179 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" event={"ID":"49d45bda-ec47-407b-b527-c7267c3825c0","Type":"ContainerStarted","Data":"51961dc119cadddbf2dc9028d04b910997bdf07db6c6c353cd0c340132d26dc4"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863771 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-78c6t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.863828 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.867696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" event={"ID":"0720e131-2f16-4741-bef5-fa81e51085a8","Type":"ContainerStarted","Data":"c66d8c86b7cf91f2340bd4fcd17f8865efc3566d23d8bf8e1c6a3a7a22463807"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.867753 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" 
event={"ID":"0720e131-2f16-4741-bef5-fa81e51085a8","Type":"ContainerStarted","Data":"be7ca453b879c8ee76e05744e150341020b5cbd68dfc0db77f6f0025af74f6e5"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.884621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" event={"ID":"69df6480-3d02-4112-b8db-3507dd5a5f49","Type":"ContainerStarted","Data":"c08178fc805e104d9dd7741e337c5cef0fa68324bb63e67b395a6b82d2a6f76d"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.887600 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"7b11752f9095d2aca0ab2ad88d0eac8a8324d281d1ce6c2a4e0f148ea173c786"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.887644 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"5d2af12d2ff7434627a68d865f3738c24770f17305159afe789a69c226cf8d96"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.889722 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.889770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"b0ff9640da837eaf58669b3a6f94ba55e0318bc5c67cb59a44276b751785d59e"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.890529 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.894434 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" event={"ID":"aa83ca9d-be38-4710-ace7-571b9e8b43dc","Type":"ContainerStarted","Data":"d2dd15319c3c0d7b810774c2cd6f8ebc724de4699bc217763b9dc02b5c38099d"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.894468 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" event={"ID":"aa83ca9d-be38-4710-ace7-571b9e8b43dc","Type":"ContainerStarted","Data":"a501b55e6d2433284c8276cd7d104a50c452023038d145f65579b91148df8ee3"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.904164 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.904253 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.905180 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-7b8sg" event={"ID":"dbdd0c4c-8844-44cd-885a-c2b40db8dcb4","Type":"ContainerStarted","Data":"45d8169f3fe2de7c4be4ab685de632eb5e0feb714a533aeadf85ed267cb47308"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.909510 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"f42ca4ac8b9a4b25bc2fb0f29333360a49f661f685caa1d9b319924894d017a6"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.909569 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"6ee4221101fab5c42f17ef4746bf8beaf983a2e5eaec69fdc3cfd0b959faf78c"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.912427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-vlht9" event={"ID":"db5b1911-47a0-41f1-b793-924df4056e20","Type":"ContainerStarted","Data":"c7f55367ee840399ddf2795f7b6fc4b4849c5a6a6e4fa3704d578be904566f40"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.914979 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" event={"ID":"1c0a3ab2-4ddb-4472-af47-3471a18714be","Type":"ContainerStarted","Data":"d69dc2c05a7778796657145db3a981f3688e5f3673d5df17aee60bfd65526682"} Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.915746 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.926901 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.928478 5121 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-c8wq7" podStartSLOduration=96.928453463 podStartE2EDuration="1m36.928453463s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.900131124 +0000 UTC m=+119.414588879" watchObservedRunningTime="2026-02-18 00:10:35.928453463 +0000 UTC m=+119.442911218" Feb 18 00:10:35 crc kubenswrapper[5121]: E0218 00:10:35.928585 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.428553015 +0000 UTC m=+119.943010750 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.928621 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podStartSLOduration=96.928613427 podStartE2EDuration="1m36.928613427s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.927712233 +0000 UTC m=+119.442169978" watchObservedRunningTime="2026-02-18 00:10:35.928613427 +0000 UTC m=+119.443071162" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 
00:10:35.928943 5121 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jp5zf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": dial tcp 10.217.0.16:5443: connect: connection refused" start-of-body= Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.929010 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podUID="1c0a3ab2-4ddb-4472-af47-3471a18714be" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": dial tcp 10.217.0.16:5443: connect: connection refused" Feb 18 00:10:35 crc kubenswrapper[5121]: I0218 00:10:35.932062 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" event={"ID":"21a8987a-ee46-4b59-b949-55032c182585","Type":"ContainerStarted","Data":"17b8ab3a0733a0bf822d019e6838ae09124d1851a1cb5d677a5de4f0211060c0"} Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.030854 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.030942 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc 
kubenswrapper[5121]: I0218 00:10:36.030981 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.031057 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.031210 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.067216 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.567195394 +0000 UTC m=+120.081653129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.067230 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" podStartSLOduration=98.067210895 podStartE2EDuration="1m38.067210895s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.061409703 +0000 UTC m=+119.575867438" watchObservedRunningTime="2026-02-18 00:10:36.067210895 +0000 UTC m=+119.581668620" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.073210 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.080754 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" podStartSLOduration=98.080730088 podStartE2EDuration="1m38.080730088s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:35.988086359 +0000 UTC 
m=+119.502544094" watchObservedRunningTime="2026-02-18 00:10:36.080730088 +0000 UTC m=+119.595187853" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.091685 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-mkw5h" podStartSLOduration=98.091641022 podStartE2EDuration="1m38.091641022s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.080616924 +0000 UTC m=+119.595074659" watchObservedRunningTime="2026-02-18 00:10:36.091641022 +0000 UTC m=+119.606098757" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.092627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.095524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.096107 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 
00:10:36.149006 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-vqrnq" podStartSLOduration=97.148980239 podStartE2EDuration="1m37.148980239s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.123383481 +0000 UTC m=+119.637841216" watchObservedRunningTime="2026-02-18 00:10:36.148980239 +0000 UTC m=+119.663437984" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.150403 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podStartSLOduration=97.150395006 podStartE2EDuration="1m37.150395006s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.149563895 +0000 UTC m=+119.664021630" watchObservedRunningTime="2026-02-18 00:10:36.150395006 +0000 UTC m=+119.664852751" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.178497 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.178901 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc 
kubenswrapper[5121]: E0218 00:10:36.180529 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.680508822 +0000 UTC m=+120.194966557 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.186687 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b49811f-e44a-43e9-80e6-15fcc9ed145f-metrics-certs\") pod \"network-metrics-daemon-mlvtl\" (UID: \"5b49811f-e44a-43e9-80e6-15fcc9ed145f\") " pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.195431 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:36 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:36 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:36 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.195561 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.206975 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-7b8sg" podStartSLOduration=98.206953782 podStartE2EDuration="1m38.206953782s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.180262705 +0000 UTC m=+119.694720440" watchObservedRunningTime="2026-02-18 00:10:36.206953782 +0000 UTC m=+119.721411517" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.273355 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.280814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.280995 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.281226 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.781207971 +0000 UTC m=+120.295665706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.302722 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mlvtl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.315858 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.383998 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.384328 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.884304221 +0000 UTC m=+120.398761956 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.488773 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.489433 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:36.989398535 +0000 UTC m=+120.503856270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.489662 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-mm659" podStartSLOduration=98.489617821 podStartE2EDuration="1m38.489617821s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:36.206828909 +0000 UTC m=+119.721286644" watchObservedRunningTime="2026-02-18 00:10:36.489617821 +0000 UTC m=+120.004075586" Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.490912 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"] Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.594252 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.594718 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:37.094694284 +0000 UTC m=+120.609152019 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.695841 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.697257 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.19723509 +0000 UTC m=+120.711692825 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.801328 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.801946 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.301920093 +0000 UTC m=+120.816377828 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:36 crc kubenswrapper[5121]: W0218 00:10:36.847832 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac WatchSource:0}: Error finding container d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac: Status 404 returned error can't find the container with id d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.910297 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:36 crc kubenswrapper[5121]: E0218 00:10:36.911791 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.41177153 +0000 UTC m=+120.926229265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.917047 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mlvtl"]
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.976374 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" event={"ID":"0e4dec16-09b2-4707-a2f6-f502d32b4fb8","Type":"ContainerStarted","Data":"ddb96af64af9a1b96dc503affdf84bb0f278e8b4d80c1d62e954412f25b0acb9"}
Feb 18 00:10:36 crc kubenswrapper[5121]: I0218 00:10:36.977596 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.008329 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" podStartSLOduration=99.00829679 podStartE2EDuration="1m39.00829679s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.003778002 +0000 UTC m=+120.518235747" watchObservedRunningTime="2026-02-18 00:10:37.00829679 +0000 UTC m=+120.522754525"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.013579 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" event={"ID":"7c318bc6-d06b-45e4-a256-a74767b40a60","Type":"ContainerStarted","Data":"015e8f11d2a9a79b540e221801bbe33065f9504fa551ab77e9a2334adfc58dbe"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.017174 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.018065 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.518047464 +0000 UTC m=+121.032505199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.024456 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"710e9623a093a8507c8d2d24970de5d7231d0839b719a7e44171780f1f1d07fa"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.071232 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-h64q4" event={"ID":"1c378e40-50b9-49d3-bbdf-f9cc1e6baaac","Type":"ContainerStarted","Data":"ae75c7dab40e2507c764d37a0a076d3421de6c93481d95347f5050699809a855"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.081740 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" event={"ID":"0d3e4d34-c74d-4572-aca8-da4c6c85fa79","Type":"ContainerStarted","Data":"4842ad275445dc936d507a93e417969263d66f2e2f4b36fcd33f63046f26aacd"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.087427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" event={"ID":"a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe","Type":"ContainerStarted","Data":"387d0d0e4dd13a423b159b27672d061a0fad21db163e790a174dd6baf0cf05ac"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.092021 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.096088 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" event={"ID":"4cab190f-d97b-45f5-8875-eb96fc357e91","Type":"ContainerStarted","Data":"97239879edff5b2ac7dd189ded9fbf06e0ae356f38969dee512df23c1be4d1c3"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.099765 5121 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-wwrwg container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.099817 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" podUID="a1a85e71-3dac-4c4a-b8f7-f5c8b08f6dfe" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.103916 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"d399d8db6c76d9d41b85c00f9ffcea1e2fcea16b9a2ac7a70a5139a086d0ec9d"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.107343 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"ba9a9b208a0efb7f38aa86e6f9a71546ec7e591cdc48b4b85faa24567db1bdd6"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.107368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" event={"ID":"0dc8a8e0-dd61-46e8-92e0-7f90eceebf36","Type":"ContainerStarted","Data":"396fa4298ee473a268791b8a8ed6dbe4ed0b30abb84ff829b3e6d36594a3e1d0"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.109616 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-km69x" podStartSLOduration=98.109607854 podStartE2EDuration="1m38.109607854s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.108207757 +0000 UTC m=+120.622665492" watchObservedRunningTime="2026-02-18 00:10:37.109607854 +0000 UTC m=+120.624065599"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.109966 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-pblgm" podStartSLOduration=98.109959924 podStartE2EDuration="1m38.109959924s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.037951584 +0000 UTC m=+120.552409319" watchObservedRunningTime="2026-02-18 00:10:37.109959924 +0000 UTC m=+120.624417659"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.118318 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.122311 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.622291045 +0000 UTC m=+121.136748780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.132101 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" event={"ID":"efe976a0-6ea6-4283-8b7c-97caa4f2111b","Type":"ContainerStarted","Data":"6ef31106189de0016484b234bfa9963eba9e0e03bcec3315c0b284f6645a1155"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.138455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"21deed03f27e9b52140c8fb82565a47e4f166478daaff6384a3266fab96d902a"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.138484 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" event={"ID":"44329a91-5654-4584-9009-4ca6f7e45584","Type":"ContainerStarted","Data":"1001fe23871e1464047c246f8246e9e7404b6b2c8a9b6e9766fac02ed97cd93b"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.173394 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" podStartSLOduration=98.173377549 podStartE2EDuration="1m38.173377549s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.15272455 +0000 UTC m=+120.667182295" watchObservedRunningTime="2026-02-18 00:10:37.173377549 +0000 UTC m=+120.687835284"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.174608 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-8wm6t" podStartSLOduration=98.174600821 podStartE2EDuration="1m38.174600821s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.172210238 +0000 UTC m=+120.686667973" watchObservedRunningTime="2026-02-18 00:10:37.174600821 +0000 UTC m=+120.689058556"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.184989 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 18 00:10:37 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld
Feb 18 00:10:37 crc kubenswrapper[5121]: [+]process-running ok
Feb 18 00:10:37 crc kubenswrapper[5121]: healthz check failed
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.185094 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.224121 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.225358 5121 ???:1] "http: TLS handshake error from 192.168.126.11:46840: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.226343 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.726312371 +0000 UTC m=+121.240770276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.226370 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-dsqn5" event={"ID":"49d45bda-ec47-407b-b527-c7267c3825c0","Type":"ContainerStarted","Data":"1cd5861ec018e05961d939c2b91ff79cb20531e3dce160dc5b69a5fbb2d5f91e"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.245822 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-bw9b4" podStartSLOduration=98.24580045 podStartE2EDuration="1m38.24580045s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.206967776 +0000 UTC m=+120.721425531" watchObservedRunningTime="2026-02-18 00:10:37.24580045 +0000 UTC m=+120.760258195"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.246305 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-h64q4" podStartSLOduration=8.246297162 podStartE2EDuration="8.246297162s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.244341032 +0000 UTC m=+120.758798767" watchObservedRunningTime="2026-02-18 00:10:37.246297162 +0000 UTC m=+120.760754917"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.283976 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-lxtfd" podStartSLOduration=98.283953865 podStartE2EDuration="1m38.283953865s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.283277217 +0000 UTC m=+120.797734962" watchObservedRunningTime="2026-02-18 00:10:37.283953865 +0000 UTC m=+120.798411620"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.346181 5121 ???:1] "http: TLS handshake error from 192.168.126.11:46852: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.351702 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.354764 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.854741343 +0000 UTC m=+121.369199078 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.357215 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" event={"ID":"6c3804ba-f1a0-4e30-9bfb-a6ebc39f7cd1","Type":"ContainerStarted","Data":"4beef904caca877837f6e2e7a5ac7471338ece8101a91c76e505e707a7d33289"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.422055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" event={"ID":"4b1e56fa-e38b-48bc-9768-0bc82aca0a0c","Type":"ContainerStarted","Data":"e172a27cddb6ef22e49a42e49a7430ef15bbede061251331fb2bbcb6ab30630e"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.459997 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.461725 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:37.961707385 +0000 UTC m=+121.476165120 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.500410 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.557801 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-djfbc" podStartSLOduration=98.557781073 podStartE2EDuration="1m38.557781073s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.411098934 +0000 UTC m=+120.925556669" watchObservedRunningTime="2026-02-18 00:10:37.557781073 +0000 UTC m=+121.072238808"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.565128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.566144 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.066132341 +0000 UTC m=+121.580590076 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.569990 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-vlht9" event={"ID":"db5b1911-47a0-41f1-b793-924df4056e20","Type":"ContainerStarted","Data":"ee075cfb8671768d603c3a02c902d05e21c9cb405d42244743e89f42d92d1e4a"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.580164 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49336: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.632303 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" event={"ID":"1c0a3ab2-4ddb-4472-af47-3471a18714be","Type":"ContainerStarted","Data":"8e43e078e00e9c1d1b2445ae3d01ba0dfa4f6d80e11a2bf4b3d54b230b7fbac6"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.666857 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.668173 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.168154764 +0000 UTC m=+121.682612499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.691339 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49348: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.692239 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" event={"ID":"21a8987a-ee46-4b59-b949-55032c182585","Type":"ContainerStarted","Data":"8cbc1c7c92dd9496c2b47a9860df67a282b9650526235404fe6a9388039430b8"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.693379 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.710804 5121 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-htdrd container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.710909 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" podUID="21a8987a-ee46-4b59-b949-55032c182585" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.720377 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"d9bda4aeb5bb8eb31750c943371a607cd7c729064f89596ee7c3afc738cd1eac"}
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.727734 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-rqnfg" podStartSLOduration=99.727715839 podStartE2EDuration="1m39.727715839s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:37.726297292 +0000 UTC m=+121.240755027" watchObservedRunningTime="2026-02-18 00:10:37.727715839 +0000 UTC m=+121.242173574"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.729758 5121 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-78c6t container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.729816 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.731845 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.731932 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.769042 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.771634 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.271619605 +0000 UTC m=+121.786077340 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.830388 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49358: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.869936 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.871973 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.371955944 +0000 UTC m=+121.886413679 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.932581 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49366: no serving certificate available for the kubelet"
Feb 18 00:10:37 crc kubenswrapper[5121]: I0218 00:10:37.974660 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:37 crc kubenswrapper[5121]: E0218 00:10:37.975055 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.475040694 +0000 UTC m=+121.989498429 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.023785 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-vlht9" podStartSLOduration=99.023758016 podStartE2EDuration="1m39.023758016s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.02313758 +0000 UTC m=+121.537595315" watchObservedRunningTime="2026-02-18 00:10:38.023758016 +0000 UTC m=+121.538215751"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.037183 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49370: no serving certificate available for the kubelet"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.060558 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" podStartSLOduration=99.060533247 podStartE2EDuration="1m39.060533247s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.057291522 +0000 UTC m=+121.571749257" watchObservedRunningTime="2026-02-18 00:10:38.060533247 +0000 UTC m=+121.574990992"
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.076350 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.076628 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.576612546 +0000 UTC m=+122.091070281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.177856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.178404 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.678381473 +0000 UTC m=+122.192839208 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.180420 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:38 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:38 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:38 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.180502 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.192806 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49374: no serving certificate available for the kubelet" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.280907 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.281149 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.281254 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.781224097 +0000 UTC m=+122.295681842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.308743 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" podStartSLOduration=99.308716155 podStartE2EDuration="1m39.308716155s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.118473059 +0000 UTC m=+121.632930804" watchObservedRunningTime="2026-02-18 00:10:38.308716155 +0000 UTC m=+121.823173890" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.383613 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.383738 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384099 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") pod \"9acc779e-6e10-4bc7-851f-c14ba843c057\" (UID: \"9acc779e-6e10-4bc7-851f-c14ba843c057\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.384777 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume" (OuterVolumeSpecName: "config-volume") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.385069 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9acc779e-6e10-4bc7-851f-c14ba843c057-config-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.385127 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.885111868 +0000 UTC m=+122.399569603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.403211 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9" (OuterVolumeSpecName: "kube-api-access-9xzk9") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "kube-api-access-9xzk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.417898 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9acc779e-6e10-4bc7-851f-c14ba843c057" (UID: "9acc779e-6e10-4bc7-851f-c14ba843c057"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487445 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487910 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9acc779e-6e10-4bc7-851f-c14ba843c057-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.487925 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xzk9\" (UniqueName: \"kubernetes.io/projected/9acc779e-6e10-4bc7-851f-c14ba843c057-kube-api-access-9xzk9\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.488023 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:38.988000385 +0000 UTC m=+122.502458110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.503118 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-qmtl4" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.589533 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.590017 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.089998387 +0000 UTC m=+122.604456122 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.633152 5121 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-jp5zf container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.16:5443/healthz\": context deadline exceeded" start-of-body= Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.633235 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" podUID="1c0a3ab2-4ddb-4472-af47-3471a18714be" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.16:5443/healthz\": context deadline exceeded" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.690801 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.691118 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.191075706 +0000 UTC m=+122.705533441 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.691473 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.691922 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.191900206 +0000 UTC m=+122.706358101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4006df6e7fe1c2990ba899f6e2ee5473fe685dffbbdbde3cdb20d2bdc6284361"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749471 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"4c3413ea01dd4a6caa2300ed225d35ec63e10da7da2d72a3c8c15034cff4d9a0"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.749918 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.753001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"fc3bacf49d92746313d1f8cbebd9a26dab5972835b1ea8f54d4f6d893586b1da"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.753055 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"bd49ad1e7370857b20e97cd3391712ddc59b000d62d54f58d2dc854e3300790b"} Feb 18 00:10:38 crc 
kubenswrapper[5121]: I0218 00:10:38.753077 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mlvtl" event={"ID":"5b49811f-e44a-43e9-80e6-15fcc9ed145f","Type":"ContainerStarted","Data":"c3811a05fdbae324fa81ffcc6bd170ffa16b14a78aa2366d180b0bcf7b0afb23"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.761621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"1b672ddde30963505315a85cf20add041f74e112fb4cb73b91bfaf63f601b3d4"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.773213 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" event={"ID":"38e2fa84-50e3-4aa5-9269-6e423103dbe2","Type":"ContainerStarted","Data":"8b6b2ea5802f3eadd6eb8c3bfdb4d8e6f668bfee80c791e394d00f0da842cd27"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.775855 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-rsbpp" event={"ID":"b46e61bd-a38a-4792-98ee-067e427538c9","Type":"ContainerStarted","Data":"275a09ca35519ecfaf83ff68694973c0ea702d66df8ce2771a2d7345fe4c99e8"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.776235 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780167 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780151 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522880-b2sfp" event={"ID":"9acc779e-6e10-4bc7-851f-c14ba843c057","Type":"ContainerDied","Data":"ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.780307 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccd14b793fa7267457270dd5edb3780dfbfcaa008da568cab70808feab32579e" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.790613 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"e75c9c7b1ce69c598f266da7896fbce34f8efad43cc5c7d70a6aec71cd142532"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.790728 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"f33322cd45cb8003bee7c99557ce59ac78866179f84aa6084b18dc68d7cc7b19"} Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.791614 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" gracePeriod=30 Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792227 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": 
dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792276 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.792334 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.792519 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.292503673 +0000 UTC m=+122.806961408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.799672 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.803006 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-jp5zf" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.804043 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-wwrwg" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.811215 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-p8ssx" podStartSLOduration=99.811184071 podStartE2EDuration="1m39.811184071s" podCreationTimestamp="2026-02-18 00:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.799966298 +0000 UTC m=+122.314424033" watchObservedRunningTime="2026-02-18 00:10:38.811184071 +0000 UTC m=+122.325641806" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.820996 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-htdrd" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.829372 5121 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-multus/network-metrics-daemon-mlvtl" podStartSLOduration=100.829351905 podStartE2EDuration="1m40.829351905s" podCreationTimestamp="2026-02-18 00:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.827146937 +0000 UTC m=+122.341604672" watchObservedRunningTime="2026-02-18 00:10:38.829351905 +0000 UTC m=+122.343809640" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.895209 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.901833 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.401809756 +0000 UTC m=+122.916267491 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.908045 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-rsbpp" podStartSLOduration=9.908021759 podStartE2EDuration="9.908021759s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:38.868346903 +0000 UTC m=+122.382804638" watchObservedRunningTime="2026-02-18 00:10:38.908021759 +0000 UTC m=+122.422479504" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.920417 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49388: no serving certificate available for the kubelet" Feb 18 00:10:38 crc kubenswrapper[5121]: I0218 00:10:38.998330 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:38 crc kubenswrapper[5121]: E0218 00:10:38.998872 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.49885408 +0000 UTC m=+123.013311815 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.103121 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.103654 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.603610424 +0000 UTC m=+123.118068159 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.174902 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:39 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:39 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:39 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.175004 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.206975 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.207601 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:39.707572848 +0000 UTC m=+123.222030583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.311691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.312134 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.812107166 +0000 UTC m=+123.326564891 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.412751 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.413035 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:39.912991149 +0000 UTC m=+123.427448884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.479987 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480630 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480666 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.480767 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="9acc779e-6e10-4bc7-851f-c14ba843c057" containerName="collect-profiles" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.500904 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.501121 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.506538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.530941 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.531393 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.0313693 +0000 UTC m=+123.545827035 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.632374 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.632599 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.132551961 +0000 UTC m=+123.647009696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633069 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633253 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633379 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.633436 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: 
\"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.633816 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.133807603 +0000 UTC m=+123.648265338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.667157 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.680339 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.683112 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.695755 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.734926 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.735235 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.23518697 +0000 UTC m=+123.749644705 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735548 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735781 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.735952 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: 
\"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.736152 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.236136864 +0000 UTC m=+123.750594589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.736875 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.737016 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"community-operators-ttn8q\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.766197 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"community-operators-ttn8q\" (UID: 
\"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") " pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.803957 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-zvwwb" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837483 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837668 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837702 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.837751 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: 
E0218 00:10:39.837922 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.337897571 +0000 UTC m=+123.852355306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.850873 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.855223 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.871192 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.904491 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939103 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939166 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939314 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.939385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.941002 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.941227 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: E0218 00:10:39.942560 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.442542023 +0000 UTC m=+123.956999958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.977448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"certified-operators-6rdts\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") " pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:39 crc kubenswrapper[5121]: I0218 00:10:39.995549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.041948 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045094 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045132 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.045175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.045361 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.545338416 +0000 UTC m=+124.059796141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.072918 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.159845 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160551 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160658 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.160746 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.161351 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.161415 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.161506 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.661485717 +0000 UTC m=+124.175943452 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.170822 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:40 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:40 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:40 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.170906 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.187811 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"community-operators-xlq58\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") " pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.240028 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.263672 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.264047 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.764012553 +0000 UTC m=+124.278470288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.273969 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49400: no serving certificate available for the kubelet" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.366406 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 
00:10:40.366995 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.866970972 +0000 UTC m=+124.381428707 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.468342 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.468731 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:40.968690376 +0000 UTC m=+124.483148111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.475979 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476055 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476079 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476219 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.476837 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.485153 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.485883 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-422hn" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.497956 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:10:40 crc kubenswrapper[5121]: W0218 00:10:40.508747 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6854ad9b_1632_47d4_82bc_bdd90768bc2a.slice/crio-0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd WatchSource:0}: Error finding container 0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd: Status 404 returned error can't find the container with id 0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.570964 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 
00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571088 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.571211 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.576357 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.076312575 +0000 UTC m=+124.590770310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673132 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.673215 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.173196535 +0000 UTC m=+124.687654270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673437 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.673741 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.173734179 +0000 UTC m=+124.688191914 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673889 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673936 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.673956 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.674340 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " 
pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.674542 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.715074 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"certified-operators-czgg8\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") " pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.787040 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.787839 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.287820886 +0000 UTC m=+124.802278621 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.799698 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.874211 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"b7ed7dc670ad2dcb9f8640d5f44b830e13e4f0554ae87aa8ba2653124a6f77c7"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.889187 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.889687 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.389668815 +0000 UTC m=+124.904126550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.901209 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"ba5ce9e402f3d620de01810fe1d74320f085bb14c8d9080e50266330e385fdc0"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.922734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerStarted","Data":"0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd"} Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.982162 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] Feb 18 00:10:40 crc kubenswrapper[5121]: I0218 00:10:40.990955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:40 crc kubenswrapper[5121]: E0218 00:10:40.991292 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:41.491267048 +0000 UTC m=+125.005724783 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.094018 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.094402 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.594382919 +0000 UTC m=+125.108840804 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.171888 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:41 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:41 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:41 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.172372 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.195219 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.195674 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:41.695622582 +0000 UTC m=+125.210080337 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.280382 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.297939 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.298436 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.798414345 +0000 UTC m=+125.312872090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.399852 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.400797 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:41.900764066 +0000 UTC m=+125.415221801 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.458326 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.478929 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.479302 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.495683 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.502197 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.502532 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:42.002517283 +0000 UTC m=+125.516975018 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.595145 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604145 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604287 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604314 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc 
kubenswrapper[5121]: E0218 00:10:41.604366 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.10431368 +0000 UTC m=+125.618771415 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.604831 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.605068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.605668 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:42.105626314 +0000 UTC m=+125.620084049 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.607925 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.609961 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.613981 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.621713 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706283 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.706417 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af 
nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.206399734 +0000 UTC m=+125.720857469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706578 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706642 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706700 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706715 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.706752 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.707164 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.707504 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.207480122 +0000 UTC m=+125.721937857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.707520 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.726676 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"redhat-marketplace-q4gm2\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") " pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.808218 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.808594 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-18 00:10:42.308554911 +0000 UTC m=+125.823012656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.808905 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809074 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809106 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.809222 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.809459 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.309436034 +0000 UTC m=+125.823893959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.811893 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.834666 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.852929 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.869305 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.870486 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.913830 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:41 crc kubenswrapper[5121]: E0218 00:10:41.914159 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.414142087 +0000 UTC m=+125.928599822 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.921400 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.973897 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a" exitCode=0 Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.974045 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"} Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.984671 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" exitCode=0 Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.984770 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181"} Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994243 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54" exitCode=0 Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994452 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"} Feb 18 00:10:41 crc kubenswrapper[5121]: I0218 00:10:41.994495 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" 
event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"68089a9179b2ee54313136fab6546d018047ab31029619dfc6933c6ec3ac176c"} Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015065 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48" exitCode=0 Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015327 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"} Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.015368 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerStarted","Data":"e3aa645abbf5b996b104f5c41a2f1ccc97cd615ef2eb0ff0e26a4d5ea630790e"} Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.023592 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.023870 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.024072 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.024155 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.026152 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.526130811 +0000 UTC m=+126.040588546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.041676 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.067835 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.068055 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.079609 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.079913 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125343 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125508 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125560 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125624 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125673 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.125700 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.126147 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.126234 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.626211972 +0000 UTC m=+126.140669707 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.126446 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.158882 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"redhat-marketplace-fp6mh\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") " pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.171726 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.190184 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:42 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:42 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:42 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.190254 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227817 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.227839 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod 
\"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.228428 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.229141 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.729129069 +0000 UTC m=+126.243586804 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.238882 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.263795 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.329524 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.329811 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.829784547 +0000 UTC m=+126.344242282 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.329878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.330859 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.830848724 +0000 UTC m=+126.345306459 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.340746 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.340802 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.365166 5121 patch_prober.go:28] interesting pod/console-64d44f6ddf-7b8sg container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.365334 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-7b8sg" podUID="dbdd0c4c-8844-44cd-885a-c2b40db8dcb4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.378269 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.378357 5121 prober.go:120] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.432074 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.432718 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.433975 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:42.933950286 +0000 UTC m=+126.448408021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.453499 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"] Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.483767 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod787ee824_3e40_4929_9eda_a58528843d28.slice/crio-214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e WatchSource:0}: Error finding container 214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e: Status 404 returned error can't find the container with id 214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.524833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.538943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.539300 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.039285005 +0000 UTC m=+126.553742740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.543509 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod60adf0de_2267_4a37_abc8_6b97aec2d3bd.slice/crio-183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017 WatchSource:0}: Error finding container 183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017: Status 404 returned error can't find the container with id 183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017 Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.646576 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.646876 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.146858853 +0000 UTC m=+126.661316588 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.720276 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.749347 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.749962 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.249937294 +0000 UTC m=+126.764395039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: W0218 00:10:42.753124 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e0ed157_f5bd_43a5_b641_bfa4e8df62ff.slice/crio-003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34 WatchSource:0}: Error finding container 003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34: Status 404 returned error can't find the container with id 003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34 Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.770258 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.850566 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.850810 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.350792307 +0000 UTC m=+126.865250042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.871243 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.887880 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.888055 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.893176 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.893243 5121 ???:1] "http: TLS handshake error from 192.168.126.11:49414: no serving certificate available for the kubelet" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.976772 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977019 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977158 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: I0218 00:10:42.977230 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:42 crc kubenswrapper[5121]: E0218 00:10:42.977724 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.477704739 +0000 UTC m=+126.992162484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.060980 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerStarted","Data":"003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.077275 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerStarted","Data":"183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.077947 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078076 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078116 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078155 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078733 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.078787 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"redhat-operators-pvff2\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.078882 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.5788635 +0000 UTC m=+127.093321235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084719 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a" exitCode=0 Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084867 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.084946 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerStarted","Data":"214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.092002 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerStarted","Data":"27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd"} Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.145359 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"redhat-operators-pvff2\" (UID: 
\"55ab02de-5c10-4bc3-b031-3205a22662ae\") " pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.173724 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:43 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:43 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:43 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.173789 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.179826 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.180793 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.68077917 +0000 UTC m=+127.195236905 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.236367 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.250154 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.283466 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.283724 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.783705647 +0000 UTC m=+127.298163382 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.314177 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.344330 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384794 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384846 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384923 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " 
pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.384968 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.385262 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.885248818 +0000 UTC m=+127.399706553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486334 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486743 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod 
\"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486809 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.486858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.487458 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:43.987431875 +0000 UTC m=+127.501889610 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.487544 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.487960 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.533085 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"redhat-operators-6rwlx\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") " pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.588028 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " 
pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.588396 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.08838321 +0000 UTC m=+127.602840945 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.661860 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.691583 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.692029 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.192003595 +0000 UTC m=+127.706461330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.797009 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.797594 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.29756219 +0000 UTC m=+127.812019925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.840152 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.897977 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:43 crc kubenswrapper[5121]: E0218 00:10:43.898746 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.398728911 +0000 UTC m=+127.913186646 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:43 crc kubenswrapper[5121]: I0218 00:10:43.944800 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"] Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.016387 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.017075 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.51706047 +0000 UTC m=+128.031518205 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.110212 5121 generic.go:358] "Generic (PLEG): container finished" podID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerID="c4d93596cf85a366d9c65b18cbb57b1a0ef35f70632d1c0e63459b646c98d329" exitCode=0 Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.110363 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerDied","Data":"c4d93596cf85a366d9c65b18cbb57b1a0ef35f70632d1c0e63459b646c98d329"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.115550 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerStarted","Data":"56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.117787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.119293 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.619264828 +0000 UTC m=+128.133722573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.127580 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"2acd9157a5c0303ad67f67ca0941df951cb9a99c9745a061c1e6e8e477768d5b"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.131909 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"d90fd19bec269295dcd896d5064cd72d8b3eeb6792e85da08c508892c9638ff0"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.138245 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643" exitCode=0 Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.138360 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643"} Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.173912 5121 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mvs4c container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 18 00:10:44 crc kubenswrapper[5121]: [-]has-synced failed: reason withheld Feb 18 00:10:44 crc kubenswrapper[5121]: [+]process-running ok Feb 18 00:10:44 crc kubenswrapper[5121]: healthz check failed Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.174037 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c" podUID="8724461b-b94b-4f4a-9c9f-4a131b9e02c2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.221306 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.222442 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.72240896 +0000 UTC m=+128.236866705 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.323311 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.323702 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.823678894 +0000 UTC m=+128.338136639 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.425300 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.425699 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:44.925682776 +0000 UTC m=+128.440140511 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.527063 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.527784 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.027767381 +0000 UTC m=+128.542225116 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.628714 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.631068 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.130977074 +0000 UTC m=+128.645434849 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.695045 5121 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.730068 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.730494 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.230477472 +0000 UTC m=+128.744935207 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.832365 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.832937 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.332915686 +0000 UTC m=+128.847373431 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.933464 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.933754 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.433729497 +0000 UTC m=+128.948187242 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:44 crc kubenswrapper[5121]: I0218 00:10:44.934075 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:44 crc kubenswrapper[5121]: E0218 00:10:44.934634 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.43460994 +0000 UTC m=+128.949067685 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.035421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.035545 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.535523194 +0000 UTC m=+129.049980939 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.035813 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.036122 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.53611251 +0000 UTC m=+129.050570245 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.137416 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.137706 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.6376365 +0000 UTC m=+129.152094255 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.138369 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.138971 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.638945324 +0000 UTC m=+129.153403059 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.146328 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.146518 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152693 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"692bc3c6a0c9f154af5247c146d77f1e40ea74f3a51c3785fd56973022f501b1"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152744 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"31b268e598ebe0e3aa1422ac68e2eaa20287c62b44f3e830ae2d03dc9801d804"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.152756 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" event={"ID":"a41b6648-bba2-4f34-b49b-f95db5ff9426","Type":"ContainerStarted","Data":"6afbaa90e25ddf368cb989cdc901e02c04abb2e2540a06a1dfeea5d5df7c10e9"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.155561 5121 generic.go:358] "Generic (PLEG): container finished" podID="194e426f-840b-4660-a161-f7a65ea58876" containerID="56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.155793 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerDied","Data":"56c8a8963e5b040d645fb93297685a7c23c9aed3a624ad9d598fe6b751444411"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.159359 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c" exitCode=0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.159612 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c"}
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.171133 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.177470 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-mvs4c"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.195459 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-v9jcr" podStartSLOduration=16.195423068 podStartE2EDuration="16.195423068s" podCreationTimestamp="2026-02-18 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:45.192003609 +0000 UTC m=+128.706461384" watchObservedRunningTime="2026-02-18 00:10:45.195423068 +0000 UTC m=+128.709880803"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.242077 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.242313 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.742268662 +0000 UTC m=+129.256726407 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.243338 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.244835 5121 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.744808848 +0000 UTC m=+129.259266763 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.344269 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.344928 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.84490316 +0000 UTC m=+129.359360905 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.445748 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.445677 5121 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-18T00:10:44.695101879Z","UUID":"4f5d7681-a594-439d-adc5-2dd55e131103","Handler":null,"Name":"","Endpoint":""}
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.446148 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-18 00:10:45.946133832 +0000 UTC m=+129.460591567 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-8g5jp" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.458986 5121 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.459046 5121 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.480981 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.532860 5121 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547181 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") pod \"194e426f-840b-4660-a161-f7a65ea58876\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547330 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "194e426f-840b-4660-a161-f7a65ea58876" (UID: "194e426f-840b-4660-a161-f7a65ea58876"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547429 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.547469 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") pod \"194e426f-840b-4660-a161-f7a65ea58876\" (UID: \"194e426f-840b-4660-a161-f7a65ea58876\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.548473 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/194e426f-840b-4660-a161-f7a65ea58876-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.553711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.556879 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "194e426f-840b-4660-a161-f7a65ea58876" (UID: "194e426f-840b-4660-a161-f7a65ea58876"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.609408 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.613131 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18 00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.622744 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"]
Feb 18
00:10:45 crc kubenswrapper[5121]: E0218 00:10:45.622861 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649556 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") pod \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649697 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") pod \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\" (UID: \"60adf0de-2267-4a37-abc8-6b97aec2d3bd\") "
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649871 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649862 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "60adf0de-2267-4a37-abc8-6b97aec2d3bd" (UID: "60adf0de-2267-4a37-abc8-6b97aec2d3bd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.649932 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/194e426f-840b-4660-a161-f7a65ea58876-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.654541 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "60adf0de-2267-4a37-abc8-6b97aec2d3bd" (UID: "60adf0de-2267-4a37-abc8-6b97aec2d3bd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.655564 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.655638 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp"
Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.694535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-8g5jp\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") "
pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.751322 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.751353 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/60adf0de-2267-4a37-abc8-6b97aec2d3bd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.829959 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 18 00:10:45 crc kubenswrapper[5121]: I0218 00:10:45.838184 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.118660 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.174952 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"60adf0de-2267-4a37-abc8-6b97aec2d3bd","Type":"ContainerDied","Data":"183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017"} Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.174999 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="183fe466e445d23b3ee18a1f78ff4247daaa478db49f1470af1755d876e6a017" Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.175041 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181058 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"194e426f-840b-4660-a161-f7a65ea58876","Type":"ContainerDied","Data":"27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd"} Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181079 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27ba6061488400cbbe9425311565331cd7deab39daabc93870a7db8265dd0abd" Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.181178 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 18 00:10:46 crc kubenswrapper[5121]: I0218 00:10:46.183001 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerStarted","Data":"51cf34af5f3e60547305a8dcaaf837202c7932c821c7bc1d4c4374385f24b01a"} Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.194607 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerStarted","Data":"3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521"} Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.194912 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.215562 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" podStartSLOduration=109.215522909 podStartE2EDuration="1m49.215522909s" podCreationTimestamp="2026-02-18 00:08:58 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:10:47.213099856 +0000 UTC m=+130.727557601" watchObservedRunningTime="2026-02-18 00:10:47.215522909 +0000 UTC m=+130.729980644" Feb 18 00:10:47 crc kubenswrapper[5121]: I0218 00:10:47.280440 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.040736 5121 ???:1] "http: TLS handshake error from 192.168.126.11:35614: no serving certificate available for the kubelet" Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.270415 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.793807 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.794253 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:48 crc kubenswrapper[5121]: I0218 00:10:48.808817 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-rsbpp" Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.344561 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 
00:10:52.354704 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-7b8sg" Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.378625 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:52 crc kubenswrapper[5121]: I0218 00:10:52.378752 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:10:54 crc kubenswrapper[5121]: I0218 00:10:54.856620 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.614218 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.616883 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.621339 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:10:55 crc kubenswrapper[5121]: E0218 00:10:55.621418 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.313932 5121 ???:1] "http: TLS handshake error from 192.168.126.11:51194: no serving certificate available for the kubelet" Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.792424 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:10:58 crc kubenswrapper[5121]: I0218 00:10:58.792568 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.378923 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.379362 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" 
podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.379446 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380437 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380533 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.380993 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"} pod="openshift-console/downloads-747b44746d-mkw5h" containerMessage="Container download-server failed liveness probe, will be restarted" Feb 18 00:11:02 crc kubenswrapper[5121]: I0218 00:11:02.381127 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" containerID="cri-o://f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7" gracePeriod=2 Feb 18 00:11:03 crc kubenswrapper[5121]: I0218 00:11:03.339230 5121 generic.go:358] "Generic (PLEG): container finished" 
podID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerID="f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7" exitCode=0 Feb 18 00:11:03 crc kubenswrapper[5121]: I0218 00:11:03.339338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerDied","Data":"f87d9dee0a7243acd74bc883d01fb4b439b5fd674097ae6c5983119f05d979f7"} Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.611046 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.612839 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.614509 5121 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 18 00:11:05 crc kubenswrapper[5121]: E0218 00:11:05.614557 5121 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" 
containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 18 00:11:08 crc kubenswrapper[5121]: I0218 00:11:08.237137 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:11:08 crc kubenswrapper[5121]: I0218 00:11:08.801232 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-zsz4p" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379621 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379684 5121 generic.go:358] "Generic (PLEG): container finished" podID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" exitCode=137 Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.379738 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerDied","Data":"1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82"} Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.787237 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.787343 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.806417 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.857681 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858735 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858822 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858676 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready" (OuterVolumeSpecName: "ready") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858898 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.858935 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") pod \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\" (UID: \"9b4e56ad-da89-4541-842d-17ba2d9bcb0a\") " Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.860276 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861348 5121 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-ready\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861388 5121 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.861403 5121 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.872600 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26" (OuterVolumeSpecName: "kube-api-access-zkf26") pod "9b4e56ad-da89-4541-842d-17ba2d9bcb0a" (UID: "9b4e56ad-da89-4541-842d-17ba2d9bcb0a"). InnerVolumeSpecName "kube-api-access-zkf26". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:11:09 crc kubenswrapper[5121]: I0218 00:11:09.962636 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zkf26\" (UniqueName: \"kubernetes.io/projected/9b4e56ad-da89-4541-842d-17ba2d9bcb0a-kube-api-access-zkf26\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.386673 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.389707 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.391983 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-mkw5h" event={"ID":"6d918a65-a99e-41a8-97de-51c2cc74b24b","Type":"ContainerStarted","Data":"eb9268643d3ff2db1eac72e807eac3f882e46944a235cf46adeedecbbbce82b9"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.392617 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.394276 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.394378 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" 
event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.395120 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.395171 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.399365 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.401496 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.404084 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.404184 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" 
event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.407537 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.415271 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418342 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-jc5sl_9b4e56ad-da89-4541-842d-17ba2d9bcb0a/kube-multus-additional-cni-plugins/0.log" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418543 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" event={"ID":"9b4e56ad-da89-4541-842d-17ba2d9bcb0a","Type":"ContainerDied","Data":"ce83ab25e1e8e9f955af7b1409e400ceb125028d31573c59d7119d8ace62ac10"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418594 5121 scope.go:117] "RemoveContainer" containerID="1415b1292d0ac6b9b8fd3ea55961b6607178c87fd37c985c60049aa35c81fc82" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.418798 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-jc5sl" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.427599 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.428219 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.428345 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.440535 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5" exitCode=0 Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.440868 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"} Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.613884 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=66.613867974 podStartE2EDuration="1m6.613867974s" podCreationTimestamp="2026-02-18 00:10:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:10.611242286 +0000 UTC m=+154.125700011" watchObservedRunningTime="2026-02-18 00:11:10.613867974 +0000 UTC m=+154.128325709" 
Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.682244 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"] Feb 18 00:11:10 crc kubenswrapper[5121]: I0218 00:11:10.693371 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-jc5sl"] Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.280729 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" path="/var/lib/kubelet/pods/9b4e56ad-da89-4541-842d-17ba2d9bcb0a/volumes" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.466909 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935" exitCode=0 Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.467074 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.473696 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerStarted","Data":"3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.478924 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerStarted","Data":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.481901 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" 
event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerStarted","Data":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.483763 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" exitCode=0 Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.483845 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.486398 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717" exitCode=0 Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.486533 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.494621 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerStarted","Data":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.500270 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7" exitCode=0 Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.500335 5121 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7"} Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.501924 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.501973 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.513594 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q4gm2" podStartSLOduration=3.844960007 podStartE2EDuration="30.513566111s" podCreationTimestamp="2026-02-18 00:10:41 +0000 UTC" firstStartedPulling="2026-02-18 00:10:43.08573329 +0000 UTC m=+126.600191025" lastFinishedPulling="2026-02-18 00:11:09.754339394 +0000 UTC m=+153.268797129" observedRunningTime="2026-02-18 00:11:11.508981782 +0000 UTC m=+155.023439557" watchObservedRunningTime="2026-02-18 00:11:11.513566111 +0000 UTC m=+155.028023916" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.548626 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttn8q" podStartSLOduration=4.770357655 podStartE2EDuration="32.548604735s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.974990265 +0000 UTC m=+125.489448000" lastFinishedPulling="2026-02-18 00:11:09.753237345 +0000 UTC 
m=+153.267695080" observedRunningTime="2026-02-18 00:11:11.547496076 +0000 UTC m=+155.061953821" watchObservedRunningTime="2026-02-18 00:11:11.548604735 +0000 UTC m=+155.063062480" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.573230 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fp6mh" podStartSLOduration=4.905780738 podStartE2EDuration="30.573202347s" podCreationTimestamp="2026-02-18 00:10:41 +0000 UTC" firstStartedPulling="2026-02-18 00:10:44.141455207 +0000 UTC m=+127.655912942" lastFinishedPulling="2026-02-18 00:11:09.808876806 +0000 UTC m=+153.323334551" observedRunningTime="2026-02-18 00:11:11.568907395 +0000 UTC m=+155.083365150" watchObservedRunningTime="2026-02-18 00:11:11.573202347 +0000 UTC m=+155.087660112" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.604971 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-czgg8" podStartSLOduration=3.828734429 podStartE2EDuration="31.604945214s" podCreationTimestamp="2026-02-18 00:10:40 +0000 UTC" firstStartedPulling="2026-02-18 00:10:42.01653424 +0000 UTC m=+125.530991975" lastFinishedPulling="2026-02-18 00:11:09.792745025 +0000 UTC m=+153.307202760" observedRunningTime="2026-02-18 00:11:11.602973833 +0000 UTC m=+155.117431578" watchObservedRunningTime="2026-02-18 00:11:11.604945214 +0000 UTC m=+155.119402949" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.812778 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:11:11 crc kubenswrapper[5121]: I0218 00:11:11.813035 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.239094 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.239627 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.378237 5121 patch_prober.go:28] interesting pod/downloads-747b44746d-mkw5h container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.378344 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-mkw5h" podUID="6d918a65-a99e-41a8-97de-51c2cc74b24b" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.36:8080/\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.509199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerStarted","Data":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"} Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.511655 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerStarted","Data":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"} Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.513791 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerStarted","Data":"2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555"} Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.516754 5121 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerStarted","Data":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"} Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.531999 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rdts" podStartSLOduration=5.709944024 podStartE2EDuration="33.531977434s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.98587316 +0000 UTC m=+125.500330895" lastFinishedPulling="2026-02-18 00:11:09.80790657 +0000 UTC m=+153.322364305" observedRunningTime="2026-02-18 00:11:12.529062488 +0000 UTC m=+156.043520233" watchObservedRunningTime="2026-02-18 00:11:12.531977434 +0000 UTC m=+156.046435169" Feb 18 00:11:12 crc kubenswrapper[5121]: I0218 00:11:12.554016 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6rwlx" podStartSLOduration=4.891257335 podStartE2EDuration="29.553994548s" podCreationTimestamp="2026-02-18 00:10:43 +0000 UTC" firstStartedPulling="2026-02-18 00:10:45.147701533 +0000 UTC m=+128.662159268" lastFinishedPulling="2026-02-18 00:11:09.810438746 +0000 UTC m=+153.324896481" observedRunningTime="2026-02-18 00:11:12.551555364 +0000 UTC m=+156.066013099" watchObservedRunningTime="2026-02-18 00:11:12.553994548 +0000 UTC m=+156.068452293" Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.348581 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-q4gm2" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" probeResult="failure" output=< Feb 18 00:11:13 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Feb 18 00:11:13 crc kubenswrapper[5121]: > Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.353328 5121 prober.go:120] 
"Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fp6mh" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" probeResult="failure" output=< Feb 18 00:11:13 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Feb 18 00:11:13 crc kubenswrapper[5121]: > Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.559094 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlq58" podStartSLOduration=6.759240841 podStartE2EDuration="34.559076793s" podCreationTimestamp="2026-02-18 00:10:39 +0000 UTC" firstStartedPulling="2026-02-18 00:10:41.99547972 +0000 UTC m=+125.509937455" lastFinishedPulling="2026-02-18 00:11:09.795315672 +0000 UTC m=+153.309773407" observedRunningTime="2026-02-18 00:11:13.552584134 +0000 UTC m=+157.067041879" watchObservedRunningTime="2026-02-18 00:11:13.559076793 +0000 UTC m=+157.073534538" Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.580171 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pvff2" podStartSLOduration=6.947545745 podStartE2EDuration="31.580152423s" podCreationTimestamp="2026-02-18 00:10:42 +0000 UTC" firstStartedPulling="2026-02-18 00:10:45.160720843 +0000 UTC m=+128.675178578" lastFinishedPulling="2026-02-18 00:11:09.793327521 +0000 UTC m=+153.307785256" observedRunningTime="2026-02-18 00:11:13.575995915 +0000 UTC m=+157.090453650" watchObservedRunningTime="2026-02-18 00:11:13.580152423 +0000 UTC m=+157.094610168" Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.662174 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:13 crc kubenswrapper[5121]: I0218 00:11:13.662257 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:14 crc 
kubenswrapper[5121]: I0218 00:11:14.706190 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6rwlx" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" probeResult="failure" output=< Feb 18 00:11:14 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Feb 18 00:11:14 crc kubenswrapper[5121]: > Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.819147 5121 ???:1] "http: TLS handshake error from 192.168.126.11:43990: no serving certificate available for the kubelet" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.832784 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833678 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833706 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833726 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833734 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833778 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833788 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" Feb 18 00:11:18 crc 
kubenswrapper[5121]: I0218 00:11:18.833903 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="60adf0de-2267-4a37-abc8-6b97aec2d3bd" containerName="pruner" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833919 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="9b4e56ad-da89-4541-842d-17ba2d9bcb0a" containerName="kube-multus-additional-cni-plugins" Feb 18 00:11:18 crc kubenswrapper[5121]: I0218 00:11:18.833929 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="194e426f-840b-4660-a161-f7a65ea58876" containerName="pruner" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.113872 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.114087 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.116980 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.120858 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.223799 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.224213 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326539 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326695 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.326742 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.362725 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.435093 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.851536 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.853443 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.952786 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:11:19 crc kubenswrapper[5121]: I0218 00:11:19.983838 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 18 00:11:19 crc kubenswrapper[5121]: W0218 00:11:19.987296 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podeddfcdef_6299_4eae_b4a2_6a5d3b5f41be.slice/crio-4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485 WatchSource:0}: Error finding container 4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485: Status 404 returned error can't find the container with id 4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485 Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.001187 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.001254 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.068929 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.240539 5121 kubelet.go:2658] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.240771 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.323195 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rdts" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.323540 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.566288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerStarted","Data":"4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485"} Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.602268 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttn8q" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.611241 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlq58" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.799894 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.800191 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:20 crc kubenswrapper[5121]: I0218 00:11:20.869241 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:21 crc 
kubenswrapper[5121]: I0218 00:11:21.523889 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-mkw5h" Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.575092 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerStarted","Data":"be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a"} Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.602464 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=3.602445086 podStartE2EDuration="3.602445086s" podCreationTimestamp="2026-02-18 00:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:21.600134006 +0000 UTC m=+165.114591751" watchObservedRunningTime="2026-02-18 00:11:21.602445086 +0000 UTC m=+165.116902851" Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.620033 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.634538 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-czgg8" Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.848605 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:11:21 crc kubenswrapper[5121]: I0218 00:11:21.894942 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q4gm2" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.014710 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlq58"] 
Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.279151 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.318826 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fp6mh" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.424720 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.431959 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.443184 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.482323 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.482607 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.483017 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: 
\"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.582510 5121 generic.go:358] "Generic (PLEG): container finished" podID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerID="be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a" exitCode=0 Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.582618 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerDied","Data":"be815f92fd88b62ca89eeb66f9a84a2a9ed332d299a7b6fb657ba21e9566640a"} Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584431 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584488 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584506 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584559 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.584726 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.601563 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"] Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.610675 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"installer-12-crc\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.750567 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:11:22 crc kubenswrapper[5121]: I0218 00:11:22.950256 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.237570 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.237854 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.294868 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591608 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerStarted","Data":"4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207"} Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591910 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerStarted","Data":"e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db"} Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.591937 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlq58" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" containerID="cri-o://b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" gracePeriod=2 Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.592864 5121 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-czgg8" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" containerID="cri-o://5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" gracePeriod=2 Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.649483 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.685250 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.68522448 podStartE2EDuration="1.68522448s" podCreationTimestamp="2026-02-18 00:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:11:23.631266673 +0000 UTC m=+167.145724458" watchObservedRunningTime="2026-02-18 00:11:23.68522448 +0000 UTC m=+167.199682205" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.730651 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.783022 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6rwlx" Feb 18 00:11:23 crc kubenswrapper[5121]: I0218 00:11:23.890223 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012044 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") pod \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012538 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") pod \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\" (UID: \"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be\") " Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012145 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" (UID: "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.012890 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.015323 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.018809 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" (UID: "eddfcdef-6299-4eae-b4a2-6a5d3b5f41be"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.023666 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.113963 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114067 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114110 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114191 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114280 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") pod \"af92a560-a657-450c-b3ad-baa6233127aa\" (UID: \"af92a560-a657-450c-b3ad-baa6233127aa\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114322 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") pod \"93fd39e7-abb5-409e-8eed-e7757f484c00\" (UID: \"93fd39e7-abb5-409e-8eed-e7757f484c00\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.114560 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eddfcdef-6299-4eae-b4a2-6a5d3b5f41be-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.115223 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities" (OuterVolumeSpecName: "utilities") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.115315 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities" (OuterVolumeSpecName: "utilities") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.120522 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr" (OuterVolumeSpecName: "kube-api-access-xmbkr") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "kube-api-access-xmbkr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.127391 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k" (OuterVolumeSpecName: "kube-api-access-r489k") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "kube-api-access-r489k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.149919 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93fd39e7-abb5-409e-8eed-e7757f484c00" (UID: "93fd39e7-abb5-409e-8eed-e7757f484c00"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.179087 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af92a560-a657-450c-b3ad-baa6233127aa" (UID: "af92a560-a657-450c-b3ad-baa6233127aa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215877 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215926 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215946 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r489k\" (UniqueName: \"kubernetes.io/projected/93fd39e7-abb5-409e-8eed-e7757f484c00-kube-api-access-r489k\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215959 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93fd39e7-abb5-409e-8eed-e7757f484c00-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215970 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af92a560-a657-450c-b3ad-baa6233127aa-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.215982 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmbkr\" (UniqueName: \"kubernetes.io/projected/af92a560-a657-450c-b3ad-baa6233127aa-kube-api-access-xmbkr\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.403216 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"]
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.403912 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fp6mh" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" containerID="cri-o://3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11" gracePeriod=2
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612562 5121 generic.go:358] "Generic (PLEG): container finished" podID="af92a560-a657-450c-b3ad-baa6233127aa" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca" exitCode=0
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612711 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612779 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlq58" event={"ID":"af92a560-a657-450c-b3ad-baa6233127aa","Type":"ContainerDied","Data":"68089a9179b2ee54313136fab6546d018047ab31029619dfc6933c6ec3ac176c"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612803 5121 scope.go:117] "RemoveContainer" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.612880 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlq58"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618645 5121 generic.go:358] "Generic (PLEG): container finished" podID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26" exitCode=0
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618888 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.618953 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-czgg8" event={"ID":"93fd39e7-abb5-409e-8eed-e7757f484c00","Type":"ContainerDied","Data":"e3aa645abbf5b996b104f5c41a2f1ccc97cd615ef2eb0ff0e26a4d5ea630790e"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.619131 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-czgg8"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.651427 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerID="3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11" exitCode=0
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.651497 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.656041 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlq58"]
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.656264 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.658837 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"eddfcdef-6299-4eae-b4a2-6a5d3b5f41be","Type":"ContainerDied","Data":"4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485"}
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.658879 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e753f0186db9d64b3482d2a7f1fd95225571198e839a906226e11720051d485"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.669238 5121 scope.go:117] "RemoveContainer" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.671133 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlq58"]
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.680498 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"]
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.692469 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-czgg8"]
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.706876 5121 scope.go:117] "RemoveContainer" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.728980 5121 scope.go:117] "RemoveContainer" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.729692 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": container with ID starting with b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca not found: ID does not exist" containerID="b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.729824 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca"} err="failed to get container status \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": rpc error: code = NotFound desc = could not find container \"b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca\": container with ID starting with b689f35cfe562709fa523fd9e6e72473478e487a973064dae4884c4d7b8fb9ca not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.729972 5121 scope.go:117] "RemoveContainer" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.730401 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": container with ID starting with 8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717 not found: ID does not exist" containerID="8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730443 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717"} err="failed to get container status \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": rpc error: code = NotFound desc = could not find container \"8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717\": container with ID starting with 8f22220741f00a9ac33cd610e93a6647a715df3dc2a62e9d3fb5f945e589d717 not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730477 5121 scope.go:117] "RemoveContainer" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.730864 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": container with ID starting with 08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54 not found: ID does not exist" containerID="08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730910 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54"} err="failed to get container status \"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": rpc error: code = NotFound desc = could not find container \"08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54\": container with ID starting with 08c6cceeb37b0733413185a5509391f0b61c2ed48962a18ee0f2321f088f8f54 not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.730942 5121 scope.go:117] "RemoveContainer" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.773859 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.777018 5121 scope.go:117] "RemoveContainer" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.804481 5121 scope.go:117] "RemoveContainer" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.834924 5121 scope.go:117] "RemoveContainer" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.835326 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": container with ID starting with 5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26 not found: ID does not exist" containerID="5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835377 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835435 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835493 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") pod \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\" (UID: \"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff\") "
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835372 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26"} err="failed to get container status \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": rpc error: code = NotFound desc = could not find container \"5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26\": container with ID starting with 5d139346ec08d023227f619450994a5602d9ec47d922ff061e42f7592838bb26 not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.835582 5121 scope.go:117] "RemoveContainer" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836608 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities" (OuterVolumeSpecName: "utilities") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.836724 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": container with ID starting with ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844 not found: ID does not exist" containerID="ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836752 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844"} err="failed to get container status \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": rpc error: code = NotFound desc = could not find container \"ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844\": container with ID starting with ceaefa350fab9f894fd9f7775700623b418d1682f10a2a972a80b9ead5380844 not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.836772 5121 scope.go:117] "RemoveContainer" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"
Feb 18 00:11:24 crc kubenswrapper[5121]: E0218 00:11:24.837005 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": container with ID starting with c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48 not found: ID does not exist" containerID="c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.837022 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48"} err="failed to get container status \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": rpc error: code = NotFound desc = could not find container \"c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48\": container with ID starting with c8bf2a4734f47806796c43aaa55915ef3344d4bf9f5ab9725caa719e048d1c48 not found: ID does not exist"
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.845349 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8" (OuterVolumeSpecName: "kube-api-access-w89r8") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "kube-api-access-w89r8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.851477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" (UID: "0e0ed157-f5bd-43a5-b641-bfa4e8df62ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936888 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936936 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:24 crc kubenswrapper[5121]: I0218 00:11:24.936947 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w89r8\" (UniqueName: \"kubernetes.io/projected/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff-kube-api-access-w89r8\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.279872 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" path="/var/lib/kubelet/pods/93fd39e7-abb5-409e-8eed-e7757f484c00/volumes"
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.281080 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af92a560-a657-450c-b3ad-baa6233127aa" path="/var/lib/kubelet/pods/af92a560-a657-450c-b3ad-baa6233127aa/volumes"
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666265 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fp6mh" event={"ID":"0e0ed157-f5bd-43a5-b641-bfa4e8df62ff","Type":"ContainerDied","Data":"003adf70dc3e5017b440f8cec52de82239033b7ae82b5a5e4179a95616dd6f34"}
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666320 5121 scope.go:117] "RemoveContainer" containerID="3a28585b97eae8553d15aa6112a7e17af9d47563f34be9467069e11cafd7ee11"
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.666457 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fp6mh"
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.686429 5121 scope.go:117] "RemoveContainer" containerID="2bdec3bd513a3c658e9ca8badc9950ba33045d33e3d17857b745d9f73b431c61"
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.687106 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"]
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.690671 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fp6mh"]
Feb 18 00:11:25 crc kubenswrapper[5121]: I0218 00:11:25.702482 5121 scope.go:117] "RemoveContainer" containerID="c8b0a21164d8ece6155198a8b8edd86920256bb3faa893f125478334fe3d3643"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.002323 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"]
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.003481 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6rwlx" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" containerID="cri-o://a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" gracePeriod=2
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.279002 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" path="/var/lib/kubelet/pods/0e0ed157-f5bd-43a5-b641-bfa4e8df62ff/volumes"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.370422 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472401 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") "
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472485 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") "
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.472813 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") pod \"d5917f75-6117-4adb-a85e-6d40a331ef66\" (UID: \"d5917f75-6117-4adb-a85e-6d40a331ef66\") "
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.473863 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities" (OuterVolumeSpecName: "utilities") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.487979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw" (OuterVolumeSpecName: "kube-api-access-vkqfw") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "kube-api-access-vkqfw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.574880 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.574918 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vkqfw\" (UniqueName: \"kubernetes.io/projected/d5917f75-6117-4adb-a85e-6d40a331ef66-kube-api-access-vkqfw\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.580745 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5917f75-6117-4adb-a85e-6d40a331ef66" (UID: "d5917f75-6117-4adb-a85e-6d40a331ef66"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.676358 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5917f75-6117-4adb-a85e-6d40a331ef66-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697764 5121 generic.go:358] "Generic (PLEG): container finished" podID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c" exitCode=0
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697868 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6rwlx"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697862 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"}
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697979 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6rwlx" event={"ID":"d5917f75-6117-4adb-a85e-6d40a331ef66","Type":"ContainerDied","Data":"d90fd19bec269295dcd896d5064cd72d8b3eeb6792e85da08c508892c9638ff0"}
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.697999 5121 scope.go:117] "RemoveContainer" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.716941 5121 scope.go:117] "RemoveContainer" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.735158 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"]
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.739100 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6rwlx"]
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.758549 5121 scope.go:117] "RemoveContainer" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775169 5121 scope.go:117] "RemoveContainer" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"
Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.775633 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": container with ID starting with a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c not found: ID does not exist" containerID="a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775729 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c"} err="failed to get container status \"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": rpc error: code = NotFound desc = could not find container \"a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c\": container with ID starting with a2b493a4451b39bf9a933a2aca3de9bcb268265ac5c3d4d609b2252502a9502c not found: ID does not exist"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.775758 5121 scope.go:117] "RemoveContainer" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"
Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.776023 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": container with ID starting with 0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935 not found: ID does not exist" containerID="0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776099 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935"} err="failed to get container status \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": rpc error: code = NotFound desc = could not find container \"0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935\": container with ID starting with 0f4c7038b8c8b485d13b5367dafa39452a2a251dbe40bc2f0eeeaf7fd534b935 not found: ID does not exist"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776127 5121 scope.go:117] "RemoveContainer" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"
Feb 18 00:11:27 crc kubenswrapper[5121]: E0218 00:11:27.776364 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": container with ID starting with 780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494 not found: ID does not exist" containerID="780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"
Feb 18 00:11:27 crc kubenswrapper[5121]: I0218 00:11:27.776389 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494"} err="failed to get container status \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": rpc error: code = NotFound desc = could not find container \"780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494\": container with ID starting with 780cbb3d38c11430531d7864ac5449608ea2345d9e16894693aabbd01b694494 not found: ID does not exist"
Feb 18 00:11:29 crc kubenswrapper[5121]: I0218 00:11:29.279742 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" path="/var/lib/kubelet/pods/d5917f75-6117-4adb-a85e-6d40a331ef66/volumes"
Feb 18 00:11:49 crc kubenswrapper[5121]: I0218 00:11:49.414675 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-m7q6l"]
Feb 18 00:11:59 crc kubenswrapper[5121]: I0218 00:11:59.805997 5121 ???:1] "http: TLS handshake error from 192.168.126.11:37704: no serving certificate available for the kubelet"
Feb 18 00:12:01 crc kubenswrapper[5121]: E0218 00:12:01.375231 5121 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.377576 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379357 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379426 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379453 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379465 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379515 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379530 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379548 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379560 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379616 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379628 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379643 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379703 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-utilities"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379721 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379732 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379748 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379791 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="extract-content"
Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379814 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379826 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379883 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379896 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="extract-utilities" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379918 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379930 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.379994 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380014 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380042 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380057 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="extract-content" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380414 5121 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="d5917f75-6117-4adb-a85e-6d40a331ef66" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380474 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="af92a560-a657-450c-b3ad-baa6233127aa" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380500 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="eddfcdef-6299-4eae-b4a2-6a5d3b5f41be" containerName="pruner" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380518 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="93fd39e7-abb5-409e-8eed-e7757f484c00" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.380568 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e0ed157-f5bd-43a5-b641-bfa4e8df62ff" containerName="registry-server" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405035 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405140 5121 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405178 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.405913 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406064 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406135 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406147 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406162 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406259 5121 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" gracePeriod=15 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406163 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406421 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406445 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406467 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406479 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406498 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406511 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406564 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 18 00:12:01 crc 
kubenswrapper[5121]: I0218 00:12:01.406575 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406588 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406600 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406617 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406631 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406700 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.406712 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407048 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407085 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407099 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407112 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407126 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407145 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407164 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407443 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407466 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407482 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407495 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407778 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 
00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.407803 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.417845 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.432478 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.460372 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.581686 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582144 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582161 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582189 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582210 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582246 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582268 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582298 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.582332 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683473 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683578 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683628 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683666 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683692 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683712 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683765 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683842 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683971 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.683995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 
crc kubenswrapper[5121]: I0218 00:12:01.684042 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684067 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684108 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684138 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684163 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684382 5121 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.684409 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.755409 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:12:01 crc kubenswrapper[5121]: E0218 00:12:01.776595 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952ed539fccf63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,LastTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.934421 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"81d28147ad2807675d04e889a96b71e411c71303aba30f03797aa880c74c1b14"} Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.937032 5121 generic.go:358] "Generic (PLEG): container finished" podID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerID="4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.937147 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerDied","Data":"4eb115528e4fd974007b9bde92fbada37ac8156d5ea4611a9c0460525bffd207"} Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.938605 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.938979 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.940856 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.942580 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944299 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944326 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944336 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" exitCode=0 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944347 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" exitCode=2 Feb 18 00:12:01 crc kubenswrapper[5121]: I0218 00:12:01.944435 5121 scope.go:117] "RemoveContainer" containerID="b7366f5cf688f97985f6c7abbde284b0fb77f17b0fd9e45b1408b00014bd9174" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.049195 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.049741 5121 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050405 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050720 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.050969 5121 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.051000 5121 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.051188 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="200ms" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.252508 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="400ms" Feb 18 00:12:02 crc kubenswrapper[5121]: E0218 00:12:02.653864 5121 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="800ms" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.955386 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.958640 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513"} Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.959484 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:02 crc kubenswrapper[5121]: I0218 00:12:02.960235 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.277823 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.279284 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.279837 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409260 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409383 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock" (OuterVolumeSpecName: "var-lock") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409438 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") pod \"faf5ed14-3492-463d-bc62-731d0d1e198e\" (UID: \"faf5ed14-3492-463d-bc62-731d0d1e198e\") " Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.409499 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.410373 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.410410 5121 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/faf5ed14-3492-463d-bc62-731d0d1e198e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.418816 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "faf5ed14-3492-463d-bc62-731d0d1e198e" (UID: "faf5ed14-3492-463d-bc62-731d0d1e198e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:12:03 crc kubenswrapper[5121]: E0218 00:12:03.455326 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="1.6s" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.511620 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/faf5ed14-3492-463d-bc62-731d0d1e198e-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.912683 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.914171 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.914925 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.915344 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.915849 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967153 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"faf5ed14-3492-463d-bc62-731d0d1e198e","Type":"ContainerDied","Data":"e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db"} Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967212 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d282b42b1bc1c669b232646281af0e365c60d08a476eec16dcbde26bb4f8db" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.967210 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.971263 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972311 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" exitCode=0 Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972445 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.972517 5121 scope.go:117] "RemoveContainer" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.993280 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.993880 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.994327 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:03 crc kubenswrapper[5121]: I0218 00:12:03.998743 5121 scope.go:117] "RemoveContainer" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021055 5121 scope.go:117] "RemoveContainer" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021802 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021803 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021869 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021954 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.021998 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022020 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022037 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022164 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022452 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022513 5121 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022526 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.022802 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.027897 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). 
InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.036397 5121 scope.go:117] "RemoveContainer" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.050422 5121 scope.go:117] "RemoveContainer" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.067064 5121 scope.go:117] "RemoveContainer" containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.123605 5121 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.123714 5121 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.140932 5121 scope.go:117] "RemoveContainer" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.141421 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": container with ID starting with c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de not found: ID does not exist" containerID="c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141488 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de"} err="failed to get container status \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": rpc error: code = NotFound desc = could not find container \"c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de\": container with ID starting with c45dedd41bfcd443ffbe0da271804256c523aa4decf0a64f100cfb1db25011de not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141530 5121 scope.go:117] "RemoveContainer" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.141937 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": container with ID starting with 4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc not found: ID does not exist" containerID="4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.141983 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc"} err="failed to get container status \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": rpc error: code = NotFound desc = could not find container \"4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc\": container with ID starting with 4a17bdd4b6e3a65523785940fc4a8e58fabf949fd8736dfb5ea2518fb377eebc not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142007 5121 scope.go:117] "RemoveContainer" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142282 5121 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": container with ID starting with b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e not found: ID does not exist" containerID="b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142340 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e"} err="failed to get container status \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": rpc error: code = NotFound desc = could not find container \"b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e\": container with ID starting with b7aebc801cdbd85b7cb6f15066b835686c20f1f3ae881c69414fd1f08c1c5e4e not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142373 5121 scope.go:117] "RemoveContainer" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142629 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": container with ID starting with 3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0 not found: ID does not exist" containerID="3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142679 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0"} err="failed to get container status \"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": rpc error: code = NotFound desc = could not find container 
\"3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0\": container with ID starting with 3eba656e816421323635e9a4e042eb5817a18cba98087d29936459c49a111ab0 not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.142695 5121 scope.go:117] "RemoveContainer" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.142960 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": container with ID starting with ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9 not found: ID does not exist" containerID="ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143003 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9"} err="failed to get container status \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": rpc error: code = NotFound desc = could not find container \"ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9\": container with ID starting with ff68ac1391946cfd61be45d0ce7e8fb0512a7d9cd4cd66d5df4da72c32403ef9 not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143028 5121 scope.go:117] "RemoveContainer" containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: E0218 00:12:04.143286 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": container with ID starting with 02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc not found: ID does not exist" 
containerID="02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.143313 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc"} err="failed to get container status \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": rpc error: code = NotFound desc = could not find container \"02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc\": container with ID starting with 02369defc9edb4b04f7aa49566db564615c4a32438943900b3a32aa535308bbc not found: ID does not exist" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.303192 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.303758 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:04 crc kubenswrapper[5121]: I0218 00:12:04.304181 5121 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:05 crc kubenswrapper[5121]: E0218 00:12:05.056599 5121 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="3.2s" Feb 18 00:12:05 crc kubenswrapper[5121]: I0218 00:12:05.278047 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 18 00:12:07 crc kubenswrapper[5121]: I0218 00:12:07.277876 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:07 crc kubenswrapper[5121]: I0218 00:12:07.278167 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:08 crc kubenswrapper[5121]: E0218 00:12:08.258545 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="6.4s" Feb 18 00:12:10 crc kubenswrapper[5121]: E0218 00:12:10.370799 5121 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.154:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18952ed539fccf63 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,LastTimestamp:2026-02-18 00:12:01.775939427 +0000 UTC m=+205.290397172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062576 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062935 5121 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119" exitCode=1 Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.062997 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119"} Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.063761 5121 scope.go:117] "RemoveContainer" containerID="33dbe15930a0f859dfbb35e0f7a31c71bc1e0e9561027580ec4a2f6aaef4e119" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.064356 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.065156 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.065904 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.475196 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" containerID="cri-o://76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" gracePeriod=15 Feb 18 00:12:14 crc kubenswrapper[5121]: E0218 00:12:14.659920 5121 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.154:6443: connect: connection refused" interval="7s" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.898547 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.899489 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900009 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900279 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.900562 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984776 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984883 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984932 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.984955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985026 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985167 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") pod 
\"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985593 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985619 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985642 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985692 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985798 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 
18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985873 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985920 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985980 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") pod \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\" (UID: \"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b\") " Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.985990 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986566 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986933 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986969 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.986991 5121 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.987433 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.987900 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.993863 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c" (OuterVolumeSpecName: "kube-api-access-hgp4c") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "kube-api-access-hgp4c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.994422 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.994992 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.995633 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.998389 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.998925 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:14 crc kubenswrapper[5121]: I0218 00:12:14.999497 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:14.999990 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.000240 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" (UID: "3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.073922 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.074082 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a994f43d642a705311d5e65d88f6f4804223e5f90573b51426bf56f7acbbd43c"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075469 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: 
connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075791 5121 generic.go:358] "Generic (PLEG): container finished" podID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" exitCode=0 Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075874 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075910 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerDied","Data":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075965 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" event={"ID":"3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b","Type":"ContainerDied","Data":"7bc05f9957f09f27cee7504d54470ecd9c12fb4c5e2801caea1078ac4942d85e"} Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075958 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.075996 5121 scope.go:117] "RemoveContainer" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.076472 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.076846 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077284 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077577 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.077923 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.078273 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088214 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgp4c\" (UniqueName: \"kubernetes.io/projected/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-kube-api-access-hgp4c\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088242 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088254 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088266 5121 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088277 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088286 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath 
\"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088297 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088308 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088318 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088328 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.088340 5121 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.103569 5121 scope.go:117] "RemoveContainer" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: E0218 00:12:15.103985 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": container with ID starting with 
76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4 not found: ID does not exist" containerID="76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.104021 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4"} err="failed to get container status \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": rpc error: code = NotFound desc = could not find container \"76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4\": container with ID starting with 76c27903e3dbbe473c11a7756d9e4b829d5e732836bd5e8ed1f7d11592c051d4 not found: ID does not exist" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.110788 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.111793 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.112397 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 
00:12:15 crc kubenswrapper[5121]: I0218 00:12:15.112849 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.270591 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.272744 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.273822 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.274762 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.275391 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" 
pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.295506 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.295574 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:16 crc kubenswrapper[5121]: E0218 00:12:16.296435 5121 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: I0218 00:12:16.296958 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:16 crc kubenswrapper[5121]: W0218 00:12:16.332728 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39 WatchSource:0}: Error finding container 74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39: Status 404 returned error can't find the container with id 74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39 Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.101506 5121 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="ac1a72c0c5c278f20dbc3c4c72881272e9e0be75d7ca2779356b125b5a60949c" exitCode=0 Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.101718 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"ac1a72c0c5c278f20dbc3c4c72881272e9e0be75d7ca2779356b125b5a60949c"} Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102065 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"74bd00605b40aca8a51681e9b05a7b5a29d232f7944c2114cd671d4a4f792b39"} Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102635 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.102832 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:17 crc kubenswrapper[5121]: E0218 00:12:17.103558 5121 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.103593 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.104280 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.104675 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.105110 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.284896 5121 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.285635 5121 status_manager.go:895] "Failed to get status for pod" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.286858 5121 status_manager.go:895] "Failed to get status for pod" podUID="57755cc5f99000cc11e193051474d4e2" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.287876 5121 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.288463 5121 status_manager.go:895] "Failed to get status for pod" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" pod="openshift-authentication/oauth-openshift-66458b6674-m7q6l" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-m7q6l\": dial tcp 38.102.83.154:6443: connect: connection refused"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.918925 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:12:17 crc kubenswrapper[5121]: I0218 00:12:17.923847 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124785 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"797b35d71eacde3014a22c3b6925a5573cec902ea796336f0eb7a99594ef5b18"}
Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124829 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"0627a53e2013a9609ac288d82ec104c66adf2f4affe28fb594643a1f1039e275"}
Feb 18 00:12:18 crc kubenswrapper[5121]: I0218 00:12:18.124844 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.133425 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6b69b85da8f3de930f6019987007910dda487e14db4f3739da94ed0aa052090b"}
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.134958 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4ab93789cd85420e2eb73bbeec247813eb3b04296fe03601e5e141c99d58f846"}
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135053 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"119f968bcc455ee0e9e5d7defdf75ad2b46f92777cf122d83ffbe5d44f9b9acf"}
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135138 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.133761 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:19 crc kubenswrapper[5121]: I0218 00:12:19.135289 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.298118 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.298448 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:21 crc kubenswrapper[5121]: I0218 00:12:21.310558 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.167820 5121 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.168272 5121 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:24 crc kubenswrapper[5121]: I0218 00:12:24.227546 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="22f0f020-6cd6-4056-9ee7-3a201b72fafc"
Feb 18 00:12:25 crc kubenswrapper[5121]: I0218 00:12:25.174829 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:25 crc kubenswrapper[5121]: I0218 00:12:25.174876 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:25 crc kubenswrapper[5121]: I0218 00:12:25.181911 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 18 00:12:26 crc kubenswrapper[5121]: I0218 00:12:26.181203 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:26 crc kubenswrapper[5121]: I0218 00:12:26.181257 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930"
Feb 18 00:12:27 crc kubenswrapper[5121]: I0218 00:12:27.294111 5121 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="22f0f020-6cd6-4056-9ee7-3a201b72fafc"
Feb 18 00:12:29 crc kubenswrapper[5121]: I0218 00:12:29.142892 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 18 00:12:33 crc kubenswrapper[5121]: I0218 00:12:33.975962 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.251192 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.454477 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.502726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.544333 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.544459 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.765627 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 18 00:12:34 crc kubenswrapper[5121]: I0218 00:12:34.913149 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.186806 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.299130 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.396538 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.738151 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\""
Feb 18 00:12:35 crc kubenswrapper[5121]: I0218 00:12:35.962374 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.074630 5121 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.090267 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.196734 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.228716 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.331533 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.435303 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.517804 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.566474 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.696390 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.732765 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.785640 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.844697 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.863449 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 18 00:12:36 crc kubenswrapper[5121]: I0218 00:12:36.958718 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.041462 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.182953 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.301171 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.428086 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.432487 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.438078 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.510559 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.849774 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.926108 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
Feb 18 00:12:37 crc kubenswrapper[5121]: I0218 00:12:37.999757 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.089519 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.095026 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.186253 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.266788 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.272060 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.329321 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.357350 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.410134 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.516716 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.770932 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.783401 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.789426 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.821006 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.822331 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.891020 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\""
Feb 18 00:12:38 crc kubenswrapper[5121]: I0218 00:12:38.939277 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.090748 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.122029 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.130625 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.206806 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.246787 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.279397 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.306300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.444476 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.536422 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.740882 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.743480 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 18 00:12:39 crc kubenswrapper[5121]: I0218 00:12:39.919830 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.136859 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.186851 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.309064 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.349896 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.508729 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.600980 5121 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.601480 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.616595 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.640946 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.647187 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.671541 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.678185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.679405 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.695366 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.707386 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.714690 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.748638 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.812638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.839992 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.853547 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 18 00:12:40 crc kubenswrapper[5121]: I0218 00:12:40.999047 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.082381 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.195384 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.224390 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.243426 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.287607 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.306094 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.322941 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.348578 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.414886 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.451619 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.470775 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.607640 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.638585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.651134 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.878859 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.917950 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.966959 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 18 00:12:41 crc kubenswrapper[5121]: I0218 00:12:41.969096 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.076228 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.134886 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.178611 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.243498 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.261702 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.317077 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.365206 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.498833 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.564152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.604810 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.621673 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.626203 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.763084 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.795866 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\""
Feb 18 00:12:42 crc kubenswrapper[5121]: I0218 00:12:42.993701 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.166694 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.181793 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.193093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.239869 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.285302 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.308028 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.347177 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.414354 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.488726 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.499040 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.521596 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.523302 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.542190 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.551298 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.585122 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.627538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.800680 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.872795 5121
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.890758 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.894437 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.912305 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.923539 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 18 00:12:43 crc kubenswrapper[5121]: I0218 00:12:43.978849 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.032256 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.061783 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.101731 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.150521 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 
00:12:44.152592 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.172705 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.356055 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.388930 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.419310 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.429228 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.433545 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.499902 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.604663 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.730524 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.746167 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.753948 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.770473 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.791023 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.792546 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 18 00:12:44 crc kubenswrapper[5121]: I0218 00:12:44.873904 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.084289 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.143072 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.166022 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.179024 5121 reflector.go:430] "Caches 
populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.206989 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.245682 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.460965 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.621163 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.763847 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.789447 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.856429 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.857093 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.869795 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.877319 5121 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Feb 18 00:12:45 crc kubenswrapper[5121]: I0218 00:12:45.924448 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.009075 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.109941 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.150847 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.178048 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.187148 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.229909 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.246033 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.441585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.541417 5121 reflector.go:430] "Caches 
populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.551670 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.658163 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.658230 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.767891 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.824635 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.824947 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.876423 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.919871 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 18 00:12:46 crc kubenswrapper[5121]: I0218 00:12:46.975300 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 18 00:12:46 crc kubenswrapper[5121]: 
I0218 00:12:46.975928 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.031050 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.110695 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.240928 5121 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.375730 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.436828 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.446599 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.497560 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.556638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.570981 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 18 00:12:47 crc 
kubenswrapper[5121]: I0218 00:12:47.659872 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.855638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.928644 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.939017 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 18 00:12:47 crc kubenswrapper[5121]: I0218 00:12:47.941921 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.017298 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.049707 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.058380 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.109880 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.185503 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.352891 
5121 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.354763 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=47.354738884 podStartE2EDuration="47.354738884s" podCreationTimestamp="2026-02-18 00:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:24.183480181 +0000 UTC m=+227.697937916" watchObservedRunningTime="2026-02-18 00:12:48.354738884 +0000 UTC m=+251.869196649" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.360707 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-m7q6l"] Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.360784 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"] Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361423 5121 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361459 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="557bb62e-e0a8-4dc6-9693-f1480c510930" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361826 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361854 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" Feb 18 00:12:48 crc 
kubenswrapper[5121]: I0218 00:12:48.361894 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.361906 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.362056 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" containerName="oauth-openshift" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.362091 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="faf5ed14-3492-463d-bc62-731d0d1e198e" containerName="installer" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.374898 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.374967 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.376850 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.379927 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.379962 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380185 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380214 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380265 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.380878 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381014 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381057 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381129 5121 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381193 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381638 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.381645 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.391776 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.404525 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.410395 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.41037239 podStartE2EDuration="24.41037239s" podCreationTimestamp="2026-02-18 00:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:48.405797548 +0000 UTC m=+251.920255323" watchObservedRunningTime="2026-02-18 00:12:48.41037239 +0000 UTC m=+251.924830165" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.457377 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 
00:12:48.466905 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.466967 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467066 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467131 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467201 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467296 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467354 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467408 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467452 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " 
pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467548 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467637 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467734 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467776 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.467826 5121 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.475601 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.541798 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.562103 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569190 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569234 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569275 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569306 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569330 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569634 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: 
\"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569856 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569899 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569928 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569973 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570018 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.569984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-dir\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570045 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570213 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.570995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " 
pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.571196 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.572598 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.573539 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-audit-policies\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.580015 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-login\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.580241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-session\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581161 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581742 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.581873 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.582341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " 
pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.587246 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-error\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.591343 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.600404 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm4tt\" (UniqueName: \"kubernetes.io/projected/4aa710d2-ba83-4fc7-ac7f-ed51869a02bd-kube-api-access-rm4tt\") pod \"oauth-openshift-5598d4f74c-wh9tq\" (UID: \"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd\") " pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.607186 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.694057 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.882634 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 18 00:12:48 crc kubenswrapper[5121]: I0218 00:12:48.923282 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5598d4f74c-wh9tq"] Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.135202 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.286291 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b" path="/var/lib/kubelet/pods/3752dabb-a8c0-4f96-8ec2-672d6a3e4f9b/volumes" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347175 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" event={"ID":"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd","Type":"ContainerStarted","Data":"3d65e9575f65a399a4f8415e38d835e5c7973eebe97c3322818f8376a268c985"} Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347249 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" event={"ID":"4aa710d2-ba83-4fc7-ac7f-ed51869a02bd","Type":"ContainerStarted","Data":"ff01553fed282844a7bb9f68b1c30560ae8b700b77316253f03a651df3c1fc6e"} Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.347485 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.406261 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" 
podStartSLOduration=60.406232865 podStartE2EDuration="1m0.406232865s" podCreationTimestamp="2026-02-18 00:11:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:12:49.405781023 +0000 UTC m=+252.920238778" watchObservedRunningTime="2026-02-18 00:12:49.406232865 +0000 UTC m=+252.920690600" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.452360 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.584835 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 18 00:12:49 crc kubenswrapper[5121]: I0218 00:12:49.784572 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.216538 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.308909 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.347849 5121 patch_prober.go:28] interesting pod/oauth-openshift-5598d4f74c-wh9tq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded" start-of-body= Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.348001 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" podUID="4aa710d2-ba83-4fc7-ac7f-ed51869a02bd" containerName="oauth-openshift" probeResult="failure" 
output="Get \"https://10.217.0.57:6443/healthz\": context deadline exceeded" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.453957 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.528943 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 18 00:12:50 crc kubenswrapper[5121]: I0218 00:12:50.641009 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5598d4f74c-wh9tq" Feb 18 00:12:51 crc kubenswrapper[5121]: I0218 00:12:51.292273 5121 ???:1] "http: TLS handshake error from 192.168.126.11:46500: no serving certificate available for the kubelet" Feb 18 00:12:58 crc kubenswrapper[5121]: I0218 00:12:58.064580 5121 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:12:58 crc kubenswrapper[5121]: I0218 00:12:58.065858 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" gracePeriod=5 Feb 18 00:13:00 crc kubenswrapper[5121]: I0218 00:13:00.287985 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.450027 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.450089 5121 generic.go:358] "Generic (PLEG): container finished" 
podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" exitCode=137 Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.661951 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.662099 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.739789 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.739955 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740041 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740057 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740150 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740213 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740273 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740393 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740531 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740939 5121 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740957 5121 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740966 5121 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.740976 5121 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.750698 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:13:03 crc kubenswrapper[5121]: I0218 00:13:03.842064 5121 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460062 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460270 5121 scope.go:117] "RemoveContainer" containerID="cc677b82d1e2454ba638c63b5c80bd5425ccacb3319e965b00d02d7e3b42f513" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.460310 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.545360 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:13:04 crc kubenswrapper[5121]: I0218 00:13:04.545471 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.280249 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 
00:13:05.280973 5121 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.297466 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.297556 5121 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bbb88bd-16dc-4dd3-aec8-8aac7cffee69" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.321465 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.321575 5121 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="5bbb88bd-16dc-4dd3-aec8-8aac7cffee69" Feb 18 00:13:05 crc kubenswrapper[5121]: I0218 00:13:05.658796 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 18 00:13:07 crc kubenswrapper[5121]: I0218 00:13:07.701422 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 18 00:13:07 crc kubenswrapper[5121]: I0218 00:13:07.860130 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.501989 5121 generic.go:358] "Generic (PLEG): container finished" podID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" exitCode=0 Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.502063 5121 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} Feb 18 00:13:09 crc kubenswrapper[5121]: I0218 00:13:09.503019 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:13:10 crc kubenswrapper[5121]: I0218 00:13:10.145111 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.603944 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.607993 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerStarted","Data":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"} Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.609104 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:13:11 crc kubenswrapper[5121]: I0218 00:13:11.610512 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" Feb 18 00:13:12 crc kubenswrapper[5121]: I0218 00:13:12.191238 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 18 00:13:12 crc kubenswrapper[5121]: I0218 00:13:12.808574 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 
00:13:14.791902 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.792825 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" containerID="cri-o://4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" gracePeriod=30 Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.797724 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:14 crc kubenswrapper[5121]: I0218 00:13:14.798132 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" containerID="cri-o://9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" gracePeriod=30 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.250264 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.293275 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294784 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294809 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294862 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.294870 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.295213 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.295240 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerName="controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.301464 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.341953 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.343206 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378486 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378539 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378614 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378685 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378712 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.378758 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") pod \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\" (UID: \"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379374 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379491 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config" (OuterVolumeSpecName: "config") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379597 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.379991 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp" (OuterVolumeSpecName: "tmp") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). 
InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380177 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca" (OuterVolumeSpecName: "client-ca") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380287 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380412 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380474 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380543 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380678 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380704 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380720 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.380724 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.381066 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cc530ba0-1249-4787-8584-22f866581116" containerName="route-controller-manager" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382222 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-proxy-ca-bundles\") on 
node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382260 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382272 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.382284 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.386151 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p" (OuterVolumeSpecName: "kube-api-access-p4t5p") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "kube-api-access-p4t5p". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.386813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" (UID: "ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.390233 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.392796 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483411 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483535 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483611 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483640 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: \"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483720 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") pod \"cc530ba0-1249-4787-8584-22f866581116\" (UID: 
\"cc530ba0-1249-4787-8584-22f866581116\") " Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.483828 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484499 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484732 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484809 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484872 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca" (OuterVolumeSpecName: "client-ca") pod 
"cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.484982 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485159 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485009 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config" (OuterVolumeSpecName: "config") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485354 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485416 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485492 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485615 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485696 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nngh\" 
(UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485801 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485822 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485841 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p4t5p\" (UniqueName: \"kubernetes.io/projected/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69-kube-api-access-p4t5p\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.485865 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cc530ba0-1249-4787-8584-22f866581116-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486390 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486528 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp" (OuterVolumeSpecName: "tmp") pod 
"cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.486765 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.487168 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.487481 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.488120 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89" (OuterVolumeSpecName: "kube-api-access-gcc89") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "kube-api-access-gcc89". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.491022 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cc530ba0-1249-4787-8584-22f866581116" (UID: "cc530ba0-1249-4787-8584-22f866581116"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.491976 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.506046 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"controller-manager-d98bfc97f-8nq5f\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.591618 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.592521 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593069 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593455 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594059 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gcc89\" (UniqueName: \"kubernetes.io/projected/cc530ba0-1249-4787-8584-22f866581116-kube-api-access-gcc89\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594218 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/cc530ba0-1249-4787-8584-22f866581116-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594355 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cc530ba0-1249-4787-8584-22f866581116-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.594107 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.593835 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.597807 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.601299 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " 
pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.613491 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"route-controller-manager-7c6447df94-58994\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") " pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640433 5121 generic.go:358] "Generic (PLEG): container finished" podID="cc530ba0-1249-4787-8584-22f866581116" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" exitCode=0 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640513 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerDied","Data":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640551 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" event={"ID":"cc530ba0-1249-4787-8584-22f866581116","Type":"ContainerDied","Data":"8d1102fcfeb79cd77d3c6e57c849eb271508e3c0765df11f609eff905e5d5dc8"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640558 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.640575 5121 scope.go:117] "RemoveContainer" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.642929 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643357 5121 generic.go:358] "Generic (PLEG): container finished" podID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" exitCode=0 Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643481 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerDied","Data":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.643762 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-x8c88" event={"ID":"ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69","Type":"ContainerDied","Data":"b6c7133a45049781cc836afe18dc873f928b6354af744750076b3f10ff4b77ed"} Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.674143 5121 scope.go:117] "RemoveContainer" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: E0218 00:13:15.675220 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": container with ID starting with 9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79 not found: ID does not exist" containerID="9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.675263 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79"} err="failed to get container status \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": rpc error: code = NotFound desc = could not find container \"9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79\": container with ID starting with 9c177f14424f3611a0eea419046770f4c044b4fedcd1887c23d6919ee4372a79 not found: ID does not exist" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.675290 5121 scope.go:117] "RemoveContainer" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.687158 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.699616 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-w48qb"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.704510 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.709291 5121 scope.go:117] "RemoveContainer" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: E0218 00:13:15.709919 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": container with ID starting with 4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6 not found: ID does not exist" containerID="4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.709970 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6"} err="failed to get container status \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": rpc error: code = NotFound desc = could not find container \"4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6\": container with ID starting with 4a26e4c396b0a251a218e16482117c3308a2c158d69e53d952886e41ec0460a6 not found: ID does not exist" Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.715192 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.722176 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-x8c88"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.923204 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:13:15 crc kubenswrapper[5121]: I0218 00:13:15.945310 5121 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"] Feb 18 00:13:15 crc kubenswrapper[5121]: W0218 00:13:15.949939 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1792aaaf_7683_495e_9fab_d35daee8eac0.slice/crio-2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc WatchSource:0}: Error finding container 2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc: Status 404 returned error can't find the container with id 2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.659033 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerStarted","Data":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.659590 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerStarted","Data":"2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.660047 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664447 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerStarted","Data":"05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664534 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerStarted","Data":"01518e01f94c0717f14956ba308198eb334de1750e195936b1a5d46a78a8b446"} Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.664795 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.695219 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.699028 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" podStartSLOduration=1.698991292 podStartE2EDuration="1.698991292s" podCreationTimestamp="2026-02-18 00:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:13:16.686277556 +0000 UTC m=+280.200735321" watchObservedRunningTime="2026-02-18 00:13:16.698991292 +0000 UTC m=+280.213449097" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.724368 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" podStartSLOduration=1.724334544 podStartE2EDuration="1.724334544s" podCreationTimestamp="2026-02-18 00:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:13:16.71636222 +0000 UTC m=+280.230819985" watchObservedRunningTime="2026-02-18 00:13:16.724334544 +0000 UTC m=+280.238792349" Feb 18 00:13:16 crc kubenswrapper[5121]: I0218 00:13:16.796594 5121 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.282094 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc530ba0-1249-4787-8584-22f866581116" path="/var/lib/kubelet/pods/cc530ba0-1249-4787-8584-22f866581116/volumes" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.283930 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69" path="/var/lib/kubelet/pods/ec21d65e-1eab-42a8-bb64-e6f9ba7b5c69/volumes" Feb 18 00:13:17 crc kubenswrapper[5121]: I0218 00:13:17.354040 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:18 crc kubenswrapper[5121]: I0218 00:13:18.791028 5121 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 18 00:13:21 crc kubenswrapper[5121]: I0218 00:13:21.764825 5121 ???:1] "http: TLS handshake error from 192.168.126.11:50076: no serving certificate available for the kubelet" Feb 18 00:13:24 crc kubenswrapper[5121]: I0218 00:13:24.633461 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 18 00:13:25 crc kubenswrapper[5121]: I0218 00:13:25.011145 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:28 crc kubenswrapper[5121]: I0218 00:13:28.528290 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.545064 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.546142 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.546230 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.547326 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.547438 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" gracePeriod=600 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.838075 5121 generic.go:358] "Generic (PLEG): container finished" podID="4000e83d-77d2-4372-93a4-5dbb22251239" containerID="c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234" exitCode=0 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.838805 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" 
event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerDied","Data":"c763fd6dfa3e272df9c90c9104d067c6998b90e0c16d5d9f5c113fd96ac3d234"} Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.843794 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" exitCode=0 Feb 18 00:13:34 crc kubenswrapper[5121]: I0218 00:13:34.843916 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97"} Feb 18 00:13:35 crc kubenswrapper[5121]: I0218 00:13:35.856282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.245391 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.352550 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") pod \"4000e83d-77d2-4372-93a4-5dbb22251239\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.353044 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") pod \"4000e83d-77d2-4372-93a4-5dbb22251239\" (UID: \"4000e83d-77d2-4372-93a4-5dbb22251239\") " Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.354104 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca" (OuterVolumeSpecName: "serviceca") pod "4000e83d-77d2-4372-93a4-5dbb22251239" (UID: "4000e83d-77d2-4372-93a4-5dbb22251239"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.364187 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp" (OuterVolumeSpecName: "kube-api-access-9nvwp") pod "4000e83d-77d2-4372-93a4-5dbb22251239" (UID: "4000e83d-77d2-4372-93a4-5dbb22251239"). InnerVolumeSpecName "kube-api-access-9nvwp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.454379 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9nvwp\" (UniqueName: \"kubernetes.io/projected/4000e83d-77d2-4372-93a4-5dbb22251239-kube-api-access-9nvwp\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.454425 5121 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4000e83d-77d2-4372-93a4-5dbb22251239-serviceca\") on node \"crc\" DevicePath \"\"" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.869099 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29522880-hmpf4" Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.870480 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29522880-hmpf4" event={"ID":"4000e83d-77d2-4372-93a4-5dbb22251239","Type":"ContainerDied","Data":"3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0"} Feb 18 00:13:36 crc kubenswrapper[5121]: I0218 00:13:36.870564 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3687564e37fbbf3ead5e98e35201f7bb38d703cba012611a2342fb57cfe0c5c0" Feb 18 00:13:37 crc kubenswrapper[5121]: I0218 00:13:37.441360 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:13:37 crc kubenswrapper[5121]: I0218 00:13:37.442070 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:14:02 crc kubenswrapper[5121]: I0218 00:14:02.896824 5121 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" 
name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 18 00:14:14 crc kubenswrapper[5121]: I0218 00:14:14.812093 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"] Feb 18 00:14:14 crc kubenswrapper[5121]: I0218 00:14:14.813148 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" containerID="cri-o://05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55" gracePeriod=30 Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.141024 5121 generic.go:358] "Generic (PLEG): container finished" podID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerID="05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55" exitCode=0 Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.141214 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerDied","Data":"05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55"} Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.292584 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.327793 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.328602 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330631 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330756 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.330865 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.331037 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="4000e83d-77d2-4372-93a4-5dbb22251239" containerName="image-pruner" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.331114 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" containerName="controller-manager" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.338015 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.341699 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419387 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.419421 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420306 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420372 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: 
\"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420432 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") pod \"93f589c8-9d36-4f32-99ff-de8809c4d470\" (UID: \"93f589c8-9d36-4f32-99ff-de8809c4d470\") " Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420558 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420594 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420630 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420715 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod 
\"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420764 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.420829 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421532 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421579 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config" (OuterVolumeSpecName: "config") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.421721 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca" (OuterVolumeSpecName: "client-ca") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.422215 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp" (OuterVolumeSpecName: "tmp") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.428610 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.428607 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4" (OuterVolumeSpecName: "kube-api-access-dw9g4") pod "93f589c8-9d36-4f32-99ff-de8809c4d470" (UID: "93f589c8-9d36-4f32-99ff-de8809c4d470"). InnerVolumeSpecName "kube-api-access-dw9g4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522413 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522728 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.522945 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523033 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod 
\"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523086 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523153 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dw9g4\" (UniqueName: \"kubernetes.io/projected/93f589c8-9d36-4f32-99ff-de8809c4d470-kube-api-access-dw9g4\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523173 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-client-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523193 5121 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523210 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/93f589c8-9d36-4f32-99ff-de8809c4d470-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523231 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93f589c8-9d36-4f32-99ff-de8809c4d470-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523247 5121 reconciler_common.go:299] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/93f589c8-9d36-4f32-99ff-de8809c4d470-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.523989 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-tmp\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.524548 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-client-ca\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.524839 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-proxy-ca-bundles\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.525186 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-config\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.526982 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-serving-cert\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.554277 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn745\" (UniqueName: \"kubernetes.io/projected/1ae817ad-1e3f-4521-a4d1-fcde6fca37e0-kube-api-access-hn745\") pod \"controller-manager-55787dc5fc-68vkf\" (UID: \"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0\") " pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.657809 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.965413 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55787dc5fc-68vkf"] Feb 18 00:14:15 crc kubenswrapper[5121]: I0218 00:14:15.981423 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.149830 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f" event={"ID":"93f589c8-9d36-4f32-99ff-de8809c4d470","Type":"ContainerDied","Data":"01518e01f94c0717f14956ba308198eb334de1750e195936b1a5d46a78a8b446"} Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.149933 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"
Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.150287 5121 scope.go:117] "RemoveContainer" containerID="05928211444dba2de42d9bbac2c9153fe73aa531d684a4893aa7533a4d5efd55"
Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.151897 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" event={"ID":"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0","Type":"ContainerStarted","Data":"2bab1c8ca9c8e135a847b31f6d8c10a1833ffa9ec6748b93b2bbcfb587c8987e"}
Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.195013 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"]
Feb 18 00:14:16 crc kubenswrapper[5121]: I0218 00:14:16.199763 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d98bfc97f-8nq5f"]
Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.166804 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" event={"ID":"1ae817ad-1e3f-4521-a4d1-fcde6fca37e0","Type":"ContainerStarted","Data":"18ed351560fac002a026b73bf851cf642629b2a043f88f986437752b73a13e53"}
Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.167223 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf"
Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.175302 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf"
Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.200182 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55787dc5fc-68vkf" podStartSLOduration=3.199836165 podStartE2EDuration="3.199836165s" podCreationTimestamp="2026-02-18 00:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:17.196907508 +0000 UTC m=+340.711365353" watchObservedRunningTime="2026-02-18 00:14:17.199836165 +0000 UTC m=+340.714293910"
Feb 18 00:14:17 crc kubenswrapper[5121]: I0218 00:14:17.287556 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93f589c8-9d36-4f32-99ff-de8809c4d470" path="/var/lib/kubelet/pods/93f589c8-9d36-4f32-99ff-de8809c4d470/volumes"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.562882 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.566347 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rdts" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" containerID="cri-o://c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" gracePeriod=30
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.569541 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.569991 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttn8q" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" containerID="cri-o://1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" gracePeriod=30
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.596259 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.596547 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" containerID="cri-o://d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" gracePeriod=30
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.618201 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.618514 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q4gm2" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" containerID="cri-o://6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" gracePeriod=30
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.632187 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.632542 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pvff2" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" containerID="cri-o://2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" gracePeriod=30
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.643023 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.660983 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"]
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.661187 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.703619 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pvff2" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" probeResult="failure" output=""
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757045 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757111 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757147 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.757180 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858899 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.858935 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.859046 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.860917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-tmp\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.861108 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.875954 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.879832 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7m4\" (UniqueName: \"kubernetes.io/projected/2265e28f-7cec-4dde-b4c4-be79e7d2ccd2-kube-api-access-tx7m4\") pod \"marketplace-operator-547dbd544d-kdn9c\" (UID: \"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.957793 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"
Feb 18 00:14:23 crc kubenswrapper[5121]: I0218 00:14:23.966960 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063625 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063787 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.063850 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") pod \"40bc3a2a-4cd6-44f6-beca-0193584836a9\" (UID: \"40bc3a2a-4cd6-44f6-beca-0193584836a9\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.075418 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities" (OuterVolumeSpecName: "utilities") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.084827 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm" (OuterVolumeSpecName: "kube-api-access-tddwm") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "kube-api-access-tddwm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.127381 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40bc3a2a-4cd6-44f6-beca-0193584836a9" (UID: "40bc3a2a-4cd6-44f6-beca-0193584836a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.162641 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165593 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165617 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40bc3a2a-4cd6-44f6-beca-0193584836a9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.165629 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tddwm\" (UniqueName: \"kubernetes.io/projected/40bc3a2a-4cd6-44f6-beca-0193584836a9-kube-api-access-tddwm\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.173449 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.203021 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.256429 5121 generic.go:358] "Generic (PLEG): container finished" podID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerID="2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" exitCode=0
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.256679 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.267103 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.267225 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.281821 5121 generic.go:358] "Generic (PLEG): container finished" podID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc" exitCode=0
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282021 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttn8q"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282460 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttn8q" event={"ID":"6854ad9b-1632-47d4-82bc-bdd90768bc2a","Type":"ContainerDied","Data":"0bd1783c1b1ab6e83b15babe5655625d9f53bc4766e79d5d4aa97e04c701fcdd"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.282515 5121 scope.go:117] "RemoveContainer" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289087 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289141 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") pod \"787ee824-3e40-4929-9eda-a58528843d28\" (UID: \"787ee824-3e40-4929-9eda-a58528843d28\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289208 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.289231 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") pod \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\" (UID: \"6854ad9b-1632-47d4-82bc-bdd90768bc2a\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.293766 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities" (OuterVolumeSpecName: "utilities") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.294866 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg" (OuterVolumeSpecName: "kube-api-access-5h9tg") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "kube-api-access-5h9tg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.293842 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities" (OuterVolumeSpecName: "utilities") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.298123 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303400 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq" (OuterVolumeSpecName: "kube-api-access-5cldq") pod "787ee824-3e40-4929-9eda-a58528843d28" (UID: "787ee824-3e40-4929-9eda-a58528843d28"). InnerVolumeSpecName "kube-api-access-5cldq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303942 5121 generic.go:358] "Generic (PLEG): container finished" podID="787ee824-3e40-4929-9eda-a58528843d28" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014" exitCode=0
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.303986 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.304031 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q4gm2" event={"ID":"787ee824-3e40-4929-9eda-a58528843d28","Type":"ContainerDied","Data":"214da5bd6a9db7db2a32ab1b1de05fdee8d2227271b7fb656ea202faa4b8ff5e"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.304223 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q4gm2"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314602 5121 generic.go:358] "Generic (PLEG): container finished" podID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" exitCode=0
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314715 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314746 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t" event={"ID":"cad52ef7-8080-48a2-91e3-5bcfc007b196","Type":"ContainerDied","Data":"a35c1a8554f97c336c169b9b7ab07394eb161632ed304015d160d6c0a71bba70"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314803 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-78c6t"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.314888 5121 scope.go:117] "RemoveContainer" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.317461 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.322907 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6854ad9b-1632-47d4-82bc-bdd90768bc2a" (UID: "6854ad9b-1632-47d4-82bc-bdd90768bc2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324113 5121 generic.go:358] "Generic (PLEG): container finished" podID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" exitCode=0
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324232 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324288 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rdts" event={"ID":"40bc3a2a-4cd6-44f6-beca-0193584836a9","Type":"ContainerDied","Data":"b7ed7dc670ad2dcb9f8640d5f44b830e13e4f0554ae87aa8ba2653124a6f77c7"}
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.324675 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rdts"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.348239 5121 scope.go:117] "RemoveContainer" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.382085 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-kdn9c"]
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.388706 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"]
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390021 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390092 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390160 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390230 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") pod \"cad52ef7-8080-48a2-91e3-5bcfc007b196\" (UID: \"cad52ef7-8080-48a2-91e3-5bcfc007b196\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390472 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390482 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5h9tg\" (UniqueName: \"kubernetes.io/projected/6854ad9b-1632-47d4-82bc-bdd90768bc2a-kube-api-access-5h9tg\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390494 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6854ad9b-1632-47d4-82bc-bdd90768bc2a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390505 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390516 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/787ee824-3e40-4929-9eda-a58528843d28-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.390524 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cldq\" (UniqueName: \"kubernetes.io/projected/787ee824-3e40-4929-9eda-a58528843d28-kube-api-access-5cldq\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.391671 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.391914 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp" (OuterVolumeSpecName: "tmp") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.396818 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.397002 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj" (OuterVolumeSpecName: "kube-api-access-nk5bj") pod "cad52ef7-8080-48a2-91e3-5bcfc007b196" (UID: "cad52ef7-8080-48a2-91e3-5bcfc007b196"). InnerVolumeSpecName "kube-api-access-nk5bj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.397671 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q4gm2"]
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.398523 5121 scope.go:117] "RemoveContainer" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.398935 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": container with ID starting with 1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc not found: ID does not exist" containerID="1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.398995 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc"} err="failed to get container status \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": rpc error: code = NotFound desc = could not find container \"1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc\": container with ID starting with 1ff1e1dde14b0aefb23f2a554c5bed26aefed3dcd996b9fdbbc507347e7af0fc not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399031 5121 scope.go:117] "RemoveContainer" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.399490 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": container with ID starting with 7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72 not found: ID does not exist" containerID="7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399527 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72"} err="failed to get container status \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": rpc error: code = NotFound desc = could not find container \"7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72\": container with ID starting with 7bbde6054c38bf25975caa9ea0d2a94aaa5c65d600164b1d0856ff6b63593d72 not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.399566 5121 scope.go:117] "RemoveContainer" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.400380 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": container with ID starting with cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a not found: ID does not exist" containerID="cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.400408 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a"} err="failed to get container status \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": rpc error: code = NotFound desc = could not find container \"cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a\": container with ID starting with cac63870cc6a794113ae38fecdb0130c3e0118b99864f89ae461470215055d1a not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.400424 5121 scope.go:117] "RemoveContainer" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.404694 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"]
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.408039 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rdts"]
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.415055 5121 scope.go:117] "RemoveContainer" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.443843 5121 scope.go:117] "RemoveContainer" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478223 5121 scope.go:117] "RemoveContainer" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.478748 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": container with ID starting with 6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014 not found: ID does not exist" containerID="6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478796 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014"} err="failed to get container status \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": rpc error: code = NotFound desc = could not find container \"6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014\": container with ID starting with 6b0053c3d39b580d56eee0db848fdc5a97563ac37afd05ec43759f7a32348014 not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.478825 5121 scope.go:117] "RemoveContainer" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.479206 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": container with ID starting with be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5 not found: ID does not exist" containerID="be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.479338 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5"} err="failed to get container status \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": rpc error: code = NotFound desc = could not find container \"be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5\": container with ID starting with be6a3d9bca22a71b18e65ca71f2a6ee66d8317cad8e8946d57894eec06d333f5 not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.479424 5121 scope.go:117] "RemoveContainer" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"
Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.479934 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": container with ID starting with 8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a not found: ID does not exist" containerID="8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.479979 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a"} err="failed to get container status \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": rpc error: code = NotFound desc = could not find container \"8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a\": container with ID starting with 8db7beddb41676f3f7fedef2657fbc1b6573f481ea6e755b28c10795162d2d7a not found: ID does not exist"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.480001 5121 scope.go:117] "RemoveContainer" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491439 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491481 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491542 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") pod \"55ab02de-5c10-4bc3-b031-3205a22662ae\" (UID: \"55ab02de-5c10-4bc3-b031-3205a22662ae\") "
Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491865 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName:
\"kubernetes.io/configmap/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491886 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/cad52ef7-8080-48a2-91e3-5bcfc007b196-tmp\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491933 5121 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/cad52ef7-8080-48a2-91e3-5bcfc007b196-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.491946 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nk5bj\" (UniqueName: \"kubernetes.io/projected/cad52ef7-8080-48a2-91e3-5bcfc007b196-kube-api-access-nk5bj\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.493575 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities" (OuterVolumeSpecName: "utilities") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.495998 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv" (OuterVolumeSpecName: "kube-api-access-xs2gv") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "kube-api-access-xs2gv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.510953 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.542121 5121 scope.go:117] "RemoveContainer" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.547305 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": container with ID starting with d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514 not found: ID does not exist" containerID="d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547355 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514"} err="failed to get container status \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": rpc error: code = NotFound desc = could not find container \"d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514\": container with ID starting with d3caa69fcbc20980ce08eee73871fe50b1a9c471e7052348a554401de825d514 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547388 5121 scope.go:117] "RemoveContainer" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.547866 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": container with ID starting with 
caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b not found: ID does not exist" containerID="caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547888 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b"} err="failed to get container status \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": rpc error: code = NotFound desc = could not find container \"caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b\": container with ID starting with caab4450ec0e6c64a07d50ed49998cb937df954f90c40ae698ebcdbf48d3d52b not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.547903 5121 scope.go:117] "RemoveContainer" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.568767 5121 scope.go:117] "RemoveContainer" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.589238 5121 scope.go:117] "RemoveContainer" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.593055 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.593081 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xs2gv\" (UniqueName: \"kubernetes.io/projected/55ab02de-5c10-4bc3-b031-3205a22662ae-kube-api-access-xs2gv\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611125 5121 scope.go:117] "RemoveContainer" 
containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.611638 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": container with ID starting with c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90 not found: ID does not exist" containerID="c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611693 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90"} err="failed to get container status \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": rpc error: code = NotFound desc = could not find container \"c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90\": container with ID starting with c1523b4c523946707e80b8e868acd2fe77691e4855690744c138d43cce033d90 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.611719 5121 scope.go:117] "RemoveContainer" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.612288 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": container with ID starting with a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d not found: ID does not exist" containerID="a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612314 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d"} err="failed to get container status \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": rpc error: code = NotFound desc = could not find container \"a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d\": container with ID starting with a56cab9ec41fee13cbe814351a6588eda2b3514557958029da546e6505cd2e8d not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612329 5121 scope.go:117] "RemoveContainer" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: E0218 00:14:24.612906 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": container with ID starting with bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181 not found: ID does not exist" containerID="bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.612943 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181"} err="failed to get container status \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": rpc error: code = NotFound desc = could not find container \"bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181\": container with ID starting with bd26c314fc8a4415540c6481444fcb88a904641ea00beb4ede7fe60ef8e45181 not found: ID does not exist" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.622527 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.629401 5121 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55ab02de-5c10-4bc3-b031-3205a22662ae" (UID: "55ab02de-5c10-4bc3-b031-3205a22662ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.629843 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttn8q"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.660396 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.665120 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-78c6t"] Feb 18 00:14:24 crc kubenswrapper[5121]: I0218 00:14:24.694600 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55ab02de-5c10-4bc3-b031-3205a22662ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.277847 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" path="/var/lib/kubelet/pods/40bc3a2a-4cd6-44f6-beca-0193584836a9/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.279265 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" path="/var/lib/kubelet/pods/6854ad9b-1632-47d4-82bc-bdd90768bc2a/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.280259 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="787ee824-3e40-4929-9eda-a58528843d28" path="/var/lib/kubelet/pods/787ee824-3e40-4929-9eda-a58528843d28/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.281629 5121 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" path="/var/lib/kubelet/pods/cad52ef7-8080-48a2-91e3-5bcfc007b196/volumes" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343721 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pvff2" event={"ID":"55ab02de-5c10-4bc3-b031-3205a22662ae","Type":"ContainerDied","Data":"2acd9157a5c0303ad67f67ca0941df951cb9a99c9745a061c1e6e8e477768d5b"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343827 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pvff2" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.343836 5121 scope.go:117] "RemoveContainer" containerID="2f3afa63f8a1d2db678e229839567ed423614d3a81604a956ad67abe65219555" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347660 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" event={"ID":"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2","Type":"ContainerStarted","Data":"9e9ffecf2797fccc41ed4577f7462c1445330d0871b7d9d5b2303f9065e35753"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" event={"ID":"2265e28f-7cec-4dde-b4c4-be79e7d2ccd2","Type":"ContainerStarted","Data":"fb87fd6d607724ad9bf74ce3f7b633577bd6b6872463224d187dc33c7ece7778"} Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.347867 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.352905 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.368914 5121 
scope.go:117] "RemoveContainer" containerID="3dd9b23da08c4dcfdd51fdb93e1c0f820b6f505f7ddee63f36bc6660f695e6b7" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.377321 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-kdn9c" podStartSLOduration=2.37730219 podStartE2EDuration="2.37730219s" podCreationTimestamp="2026-02-18 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:25.369507235 +0000 UTC m=+348.883965000" watchObservedRunningTime="2026-02-18 00:14:25.37730219 +0000 UTC m=+348.891759935" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.391064 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.408046 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pvff2"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.411228 5121 scope.go:117] "RemoveContainer" containerID="9dab05515e6db77b43d60e41519ec993edf909177c201915f71ceb9b10cf035c" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.783572 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784844 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784879 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784899 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" 
containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784909 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784928 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784939 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784952 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784981 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.784998 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785010 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785022 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785031 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785043 5121 cpu_manager.go:401] "RemoveStaleState: 
containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785051 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785074 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785083 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785100 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785109 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785122 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.785133 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="extract-content" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786583 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786614 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 
00:14:25.786670 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786681 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786702 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786711 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="extract-utilities" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786880 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="787ee824-3e40-4929-9eda-a58528843d28" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786908 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786933 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6854ad9b-1632-47d4-82bc-bdd90768bc2a" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786949 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.786966 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="40bc3a2a-4cd6-44f6-beca-0193584836a9" containerName="registry-server" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787158 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" 
containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787183 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.787347 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="cad52ef7-8080-48a2-91e3-5bcfc007b196" containerName="marketplace-operator" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.798834 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.799032 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.802808 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918271 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.918297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.975519 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.991093 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.991254 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:25 crc kubenswrapper[5121]: I0218 00:14:25.993934 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019751 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019794 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019826 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcbg\" (UniqueName: 
\"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019884 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019905 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.019933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.020448 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.020614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.047607 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"redhat-marketplace-9knfx\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.114445 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.121536 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.122179 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.122245 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " 
pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.123378 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-utilities\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.123454 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17b15350-ab27-4821-bfb5-2ca12b36c32d-catalog-content\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.145872 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7pc\" (UniqueName: \"kubernetes.io/projected/17b15350-ab27-4821-bfb5-2ca12b36c32d-kube-api-access-6w7pc\") pod \"community-operators-m24xj\" (UID: \"17b15350-ab27-4821-bfb5-2ca12b36c32d\") " pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.314496 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m24xj" Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.608575 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:14:26 crc kubenswrapper[5121]: W0218 00:14:26.619027 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9e0e10c_e462_4d05_9e54_25f1527555c1.slice/crio-02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1 WatchSource:0}: Error finding container 02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1: Status 404 returned error can't find the container with id 02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1 Feb 18 00:14:26 crc kubenswrapper[5121]: I0218 00:14:26.734532 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m24xj"] Feb 18 00:14:26 crc kubenswrapper[5121]: W0218 00:14:26.744177 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17b15350_ab27_4821_bfb5_2ca12b36c32d.slice/crio-a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2 WatchSource:0}: Error finding container a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2: Status 404 returned error can't find the container with id a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.282835 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55ab02de-5c10-4bc3-b031-3205a22662ae" path="/var/lib/kubelet/pods/55ab02de-5c10-4bc3-b031-3205a22662ae/volumes" Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380096 5121 generic.go:358] "Generic (PLEG): container finished" podID="17b15350-ab27-4821-bfb5-2ca12b36c32d" containerID="695efa8716fbb1382b6430d1e3b3351427f8a2c793baf206a4a0b5bb40681ddf" 
exitCode=0 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380219 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerDied","Data":"695efa8716fbb1382b6430d1e3b3351427f8a2c793baf206a4a0b5bb40681ddf"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.380278 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerStarted","Data":"a335c7d26b4dcaf53dcf388840ebcc3c60bcdf31f359417351bb443eb7fcc6f2"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.384237 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18" exitCode=0 Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.385122 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"} Feb 18 00:14:27 crc kubenswrapper[5121]: I0218 00:14:27.385155 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerStarted","Data":"02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.177424 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.196709 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.196900 5121 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.200118 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.269851 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.269934 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.270099 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.371258 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: 
I0218 00:14:28.371316 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.371339 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.372205 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-utilities\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.373167 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3bb7195-d543-4fba-bbe3-661b888f6ab3-catalog-content\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.379457 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.385543 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.388457 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.390507 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.393401 5121 generic.go:358] "Generic (PLEG): container finished" podID="17b15350-ab27-4821-bfb5-2ca12b36c32d" containerID="b0535d96b50b19d29da4e46480762c9457882317b00bf2b0fb09a9a21a955cdf" exitCode=0 Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.393494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerDied","Data":"b0535d96b50b19d29da4e46480762c9457882317b00bf2b0fb09a9a21a955cdf"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.395745 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28wz6\" (UniqueName: \"kubernetes.io/projected/b3bb7195-d543-4fba-bbe3-661b888f6ab3-kube-api-access-28wz6\") pod \"certified-operators-5hnxm\" (UID: \"b3bb7195-d543-4fba-bbe3-661b888f6ab3\") " pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.405117 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"} Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.405314 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1" 
exitCode=0 Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477617 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477736 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.477762 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.512738 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5hnxm" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.522547 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.538715 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.556638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579401 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579857 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579898 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579946 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579967 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.579991 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580010 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580053 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580089 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod 
\"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580119 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580137 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.580869 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-utilities\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.587198 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f3e3949-ddb8-4d79-8063-8e319147d2b5-catalog-content\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.616341 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.617539 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbghz\" (UniqueName: \"kubernetes.io/projected/7f3e3949-ddb8-4d79-8063-8e319147d2b5-kube-api-access-cbghz\") pod \"redhat-operators-svl96\" (UID: \"7f3e3949-ddb8-4d79-8063-8e319147d2b5\") " pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681549 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681596 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681615 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681676 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681757 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.681775 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.683075 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/39a55eed-2143-45b6-854a-67ea1f2842d9-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 
00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.684435 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-trusted-ca\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.686890 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-tls\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.688141 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/39a55eed-2143-45b6-854a-67ea1f2842d9-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.696340 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/39a55eed-2143-45b6-854a-67ea1f2842d9-registry-certificates\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.706262 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-bound-sa-token\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " 
pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.707318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntn5l\" (UniqueName: \"kubernetes.io/projected/39a55eed-2143-45b6-854a-67ea1f2842d9-kube-api-access-ntn5l\") pod \"image-registry-5d9d95bf5b-hrxzn\" (UID: \"39a55eed-2143-45b6-854a-67ea1f2842d9\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.754986 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svl96" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.891086 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" Feb 18 00:14:28 crc kubenswrapper[5121]: I0218 00:14:28.999188 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5hnxm"] Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.199494 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svl96"] Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.340854 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"] Feb 18 00:14:29 crc kubenswrapper[5121]: W0218 00:14:29.344693 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39a55eed_2143_45b6_854a_67ea1f2842d9.slice/crio-efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351 WatchSource:0}: Error finding container efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351: Status 404 returned error can't find the container with id efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.423293 
5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerStarted","Data":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.444861 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" event={"ID":"39a55eed-2143-45b6-854a-67ea1f2842d9","Type":"ContainerStarted","Data":"efdb99eddf92f40fe77223c428c9466f89edb55536c796e52eecac44b6dbb351"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451260 5121 generic.go:358] "Generic (PLEG): container finished" podID="7f3e3949-ddb8-4d79-8063-8e319147d2b5" containerID="152449ca31ab356a7b6f003f28252a7246f5c4bbc0beba6ae0a09d44123d9b19" exitCode=0 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451411 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerDied","Data":"152449ca31ab356a7b6f003f28252a7246f5c4bbc0beba6ae0a09d44123d9b19"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.451493 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"feb3a607f3f56e77975f785301b061cc596d0468819c14a6c8b04a2169eba85f"} Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.462714 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3bb7195-d543-4fba-bbe3-661b888f6ab3" containerID="7d8e7c7522a172304434cadf1bd36d87f8c6ccabefa90563bf1f0309846201b7" exitCode=0 Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.462975 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" 
event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerDied","Data":"7d8e7c7522a172304434cadf1bd36d87f8c6ccabefa90563bf1f0309846201b7"}
Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.463025 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"61deae93ec3b56fc3f8a17bd5230306fff8989c7fe8f2d357bd4be4f5ec383a2"}
Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.474707 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m24xj" event={"ID":"17b15350-ab27-4821-bfb5-2ca12b36c32d","Type":"ContainerStarted","Data":"c9239c6c862695cfb680c9192ed0c93fd102bcc5c40085f9d8a062351dc2186e"}
Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.485380 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9knfx" podStartSLOduration=3.900433393 podStartE2EDuration="4.485351715s" podCreationTimestamp="2026-02-18 00:14:25 +0000 UTC" firstStartedPulling="2026-02-18 00:14:27.387405325 +0000 UTC m=+350.901863060" lastFinishedPulling="2026-02-18 00:14:27.972323647 +0000 UTC m=+351.486781382" observedRunningTime="2026-02-18 00:14:29.453711224 +0000 UTC m=+352.968168989" watchObservedRunningTime="2026-02-18 00:14:29.485351715 +0000 UTC m=+352.999809580"
Feb 18 00:14:29 crc kubenswrapper[5121]: I0218 00:14:29.514013 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m24xj" podStartSLOduration=3.9478544749999998 podStartE2EDuration="4.513986916s" podCreationTimestamp="2026-02-18 00:14:25 +0000 UTC" firstStartedPulling="2026-02-18 00:14:27.381240643 +0000 UTC m=+350.895698388" lastFinishedPulling="2026-02-18 00:14:27.947373094 +0000 UTC m=+351.461830829" observedRunningTime="2026-02-18 00:14:29.507766602 +0000 UTC m=+353.022224347" watchObservedRunningTime="2026-02-18 00:14:29.513986916 +0000 UTC m=+353.028444641"
Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.480767 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" event={"ID":"39a55eed-2143-45b6-854a-67ea1f2842d9","Type":"ContainerStarted","Data":"a917b28c3ac6918f67d939637a1892a665b55c12db5d3815ce542162ad2ab7fd"}
Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.481105 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"
Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.483348 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011"}
Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.486515 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93"}
Feb 18 00:14:30 crc kubenswrapper[5121]: I0218 00:14:30.508066 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn" podStartSLOduration=2.50804974 podStartE2EDuration="2.50804974s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:30.507796783 +0000 UTC m=+354.022254518" watchObservedRunningTime="2026-02-18 00:14:30.50804974 +0000 UTC m=+354.022507485"
Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.495548 5121 generic.go:358] "Generic (PLEG): container finished" podID="7f3e3949-ddb8-4d79-8063-8e319147d2b5" containerID="418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011" exitCode=0
Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.495634 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerDied","Data":"418ed9fe8facd0443d8e7be89975eaefdac7e602715557b40982aba116c03011"}
Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.498986 5121 generic.go:358] "Generic (PLEG): container finished" podID="b3bb7195-d543-4fba-bbe3-661b888f6ab3" containerID="8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93" exitCode=0
Feb 18 00:14:31 crc kubenswrapper[5121]: I0218 00:14:31.499105 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerDied","Data":"8935e9a6a7dd157b879fbccb4cab3defec881d667e5fdaacaf50d1f351228c93"}
Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.508263 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svl96" event={"ID":"7f3e3949-ddb8-4d79-8063-8e319147d2b5","Type":"ContainerStarted","Data":"219a660440b2bf82c64723910432192df9680da2eb2959df9cac5ae85ce60327"}
Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.511303 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5hnxm" event={"ID":"b3bb7195-d543-4fba-bbe3-661b888f6ab3","Type":"ContainerStarted","Data":"d1347696d0690c5d4142655da2be5d681d16ad46135cad36344552f2a69ca6ef"}
Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.530717 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svl96" podStartSLOduration=3.84664176 podStartE2EDuration="4.530693574s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" firstStartedPulling="2026-02-18 00:14:29.453090698 +0000 UTC m=+352.967548443" lastFinishedPulling="2026-02-18 00:14:30.137142532 +0000 UTC m=+353.651600257" observedRunningTime="2026-02-18 00:14:32.524501352 +0000 UTC m=+356.038959087" watchObservedRunningTime="2026-02-18 00:14:32.530693574 +0000 UTC m=+356.045151329"
Feb 18 00:14:32 crc kubenswrapper[5121]: I0218 00:14:32.552547 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5hnxm" podStartSLOduration=3.898689767 podStartE2EDuration="4.552524027s" podCreationTimestamp="2026-02-18 00:14:28 +0000 UTC" firstStartedPulling="2026-02-18 00:14:29.464149529 +0000 UTC m=+352.978607254" lastFinishedPulling="2026-02-18 00:14:30.117983769 +0000 UTC m=+353.632441514" observedRunningTime="2026-02-18 00:14:32.548698626 +0000 UTC m=+356.063156371" watchObservedRunningTime="2026-02-18 00:14:32.552524027 +0000 UTC m=+356.066981782"
Feb 18 00:14:34 crc kubenswrapper[5121]: I0218 00:14:34.803067 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"]
Feb 18 00:14:34 crc kubenswrapper[5121]: I0218 00:14:34.803835 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager" containerID="cri-o://dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" gracePeriod=30
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.326137 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.355330 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"]
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.355995 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.356015 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.356122 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerName="route-controller-manager"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.367686 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.379474 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"]
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395273 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") "
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395324 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") "
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395362 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") "
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395398 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") "
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395569 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") pod \"1792aaaf-7683-495e-9fab-d35daee8eac0\" (UID: \"1792aaaf-7683-495e-9fab-d35daee8eac0\") "
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395786 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395903 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395923 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp" (OuterVolumeSpecName: "tmp") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.395974 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396023 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396044 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396081 5121 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1792aaaf-7683-495e-9fab-d35daee8eac0-tmp\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396348 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config" (OuterVolumeSpecName: "config") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.396612 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca" (OuterVolumeSpecName: "client-ca") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.406821 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh" (OuterVolumeSpecName: "kube-api-access-5nngh") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "kube-api-access-5nngh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.407544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1792aaaf-7683-495e-9fab-d35daee8eac0" (UID: "1792aaaf-7683-495e-9fab-d35daee8eac0"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497329 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497390 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497452 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497486 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497518 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497560 5121 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497572 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5nngh\" (UniqueName: \"kubernetes.io/projected/1792aaaf-7683-495e-9fab-d35daee8eac0-kube-api-access-5nngh\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497585 5121 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1792aaaf-7683-495e-9fab-d35daee8eac0-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.497595 5121 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1792aaaf-7683-495e-9fab-d35daee8eac0-client-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498198 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-tmp\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-client-ca\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.498925 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-config\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.502394 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-serving-cert\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.516535 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frzzf\" (UniqueName: \"kubernetes.io/projected/eac52ad9-59fe-4424-9cc6-bfe2d4cd1144-kube-api-access-frzzf\") pod \"route-controller-manager-9997fb9c5-jkk6z\" (UID: \"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144\") " pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544222 5121 generic.go:358] "Generic (PLEG): container finished" podID="1792aaaf-7683-495e-9fab-d35daee8eac0" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e" exitCode=0
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544462 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerDied","Data":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"}
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544502 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994" event={"ID":"1792aaaf-7683-495e-9fab-d35daee8eac0","Type":"ContainerDied","Data":"2cc1e3e5873f4c5804dd14921c8b55fa72b3e555cb49d6a181160f170c6870dc"}
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544524 5121 scope.go:117] "RemoveContainer" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.544789 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.575329 5121 scope.go:117] "RemoveContainer" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"
Feb 18 00:14:35 crc kubenswrapper[5121]: E0218 00:14:35.577400 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": container with ID starting with dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e not found: ID does not exist" containerID="dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.577485 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e"} err="failed to get container status \"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": rpc error: code = NotFound desc = could not find container \"dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e\": container with ID starting with dfacbbd603a86b4b562e49ad20bdcfda63cb5dc5a914b960b02ba6829a66e57e not found: ID does not exist"
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.587849 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"]
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.592319 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c6447df94-58994"]
Feb 18 00:14:35 crc kubenswrapper[5121]: I0218 00:14:35.688873 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.115145 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.115193 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.123721 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"]
Feb 18 00:14:36 crc kubenswrapper[5121]: W0218 00:14:36.134768 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeac52ad9_59fe_4424_9cc6_bfe2d4cd1144.slice/crio-065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956 WatchSource:0}: Error finding container 065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956: Status 404 returned error can't find the container with id 065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.176246 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.315731 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m24xj"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.315801 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-m24xj"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.375132 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m24xj"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.552844 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" event={"ID":"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144","Type":"ContainerStarted","Data":"6b11c5c10374c655462753d61a0a2c359f0c9ebc0780b630186446a76633286b"}
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.553694 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.553800 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" event={"ID":"eac52ad9-59fe-4424-9cc6-bfe2d4cd1144","Type":"ContainerStarted","Data":"065a8e1c3e248197d280c84fe24b3e874c60c44c293c29e9128b8096cf140956"}
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.574267 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z" podStartSLOduration=2.5742408770000003 podStartE2EDuration="2.574240877s" podCreationTimestamp="2026-02-18 00:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:14:36.569228005 +0000 UTC m=+360.083685740" watchObservedRunningTime="2026-02-18 00:14:36.574240877 +0000 UTC m=+360.088698622"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.607793 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m24xj"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.614895 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9knfx"
Feb 18 00:14:36 crc kubenswrapper[5121]: I0218 00:14:36.999346 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9997fb9c5-jkk6z"
Feb 18 00:14:37 crc kubenswrapper[5121]: I0218 00:14:37.280178 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1792aaaf-7683-495e-9fab-d35daee8eac0" path="/var/lib/kubelet/pods/1792aaaf-7683-495e-9fab-d35daee8eac0/volumes"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.513713 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-5hnxm"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.514419 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5hnxm"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.560185 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5hnxm"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.615392 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5hnxm"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.756405 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-svl96"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.756537 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svl96"
Feb 18 00:14:38 crc kubenswrapper[5121]: I0218 00:14:38.809924 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svl96"
Feb 18 00:14:39 crc kubenswrapper[5121]: I0218 00:14:39.643714 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svl96"
Feb 18 00:14:51 crc kubenswrapper[5121]: I0218 00:14:51.506980 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-hrxzn"
Feb 18 00:14:51 crc kubenswrapper[5121]: I0218 00:14:51.573100 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"]
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.159886 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"]
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.175596 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"]
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.175854 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.181493 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.181838 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.334618 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.335060 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.335261 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.437383 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.438020 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.438415 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.440187 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.450879 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.467995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"collect-profiles-29522895-2mkqz\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:00 crc kubenswrapper[5121]: I0218 00:15:00.552831 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.051766 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"]
Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732249 5121 generic.go:358] "Generic (PLEG): container finished" podID="a4615074-d315-44d4-99e1-61ad71c1e230" containerID="326a0ee967a9b64ba24c4fe4634a35ca941d8d900a768c26a5a4e78f169bba26" exitCode=0
Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732380 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerDied","Data":"326a0ee967a9b64ba24c4fe4634a35ca941d8d900a768c26a5a4e78f169bba26"}
Feb 18 00:15:01 crc kubenswrapper[5121]: I0218 00:15:01.732845 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerStarted","Data":"860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43"}
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.145132 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz"
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287159 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") "
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287518 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") "
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.287617 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") pod \"a4615074-d315-44d4-99e1-61ad71c1e230\" (UID: \"a4615074-d315-44d4-99e1-61ad71c1e230\") "
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.288326 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume" (OuterVolumeSpecName: "config-volume") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.295275 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l" (OuterVolumeSpecName: "kube-api-access-28c8l") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). InnerVolumeSpecName "kube-api-access-28c8l". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.299855 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a4615074-d315-44d4-99e1-61ad71c1e230" (UID: "a4615074-d315-44d4-99e1-61ad71c1e230"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389230 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4615074-d315-44d4-99e1-61ad71c1e230-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389288 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a4615074-d315-44d4-99e1-61ad71c1e230-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.389307 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-28c8l\" (UniqueName: \"kubernetes.io/projected/a4615074-d315-44d4-99e1-61ad71c1e230-kube-api-access-28c8l\") on node \"crc\" DevicePath \"\""
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.751349 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" event={"ID":"a4615074-d315-44d4-99e1-61ad71c1e230","Type":"ContainerDied","Data":"860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43"}
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.752102 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860fcbddc2838abec242e036f52b9da1edb2cef24994c66a3b8170eb8ca8aa43"
Feb 18 00:15:03 crc kubenswrapper[5121]: I0218 00:15:03.751415 5121 util.go:48] "No ready
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522895-2mkqz" Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.617992 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" containerID="cri-o://3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" gracePeriod=30 Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.870697 5121 generic.go:358] "Generic (PLEG): container finished" podID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerID="3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" exitCode=0 Feb 18 00:15:16 crc kubenswrapper[5121]: I0218 00:15:16.870776 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerDied","Data":"3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521"} Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.100317 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.221972 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222033 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222236 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222493 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.222517 5121 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.223278 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.223371 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") pod \"7147ca0c-09b0-4078-8e66-4d589f54c85a\" (UID: \"7147ca0c-09b0-4078-8e66-4d589f54c85a\") " Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.224500 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.224944 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.231459 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.231496 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh" (OuterVolumeSpecName: "kube-api-access-phphh") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "kube-api-access-phphh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.232029 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.233876 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.241983 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.254208 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "7147ca0c-09b0-4078-8e66-4d589f54c85a" (UID: "7147ca0c-09b0-4078-8e66-4d589f54c85a"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326154 5121 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326218 5121 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/7147ca0c-09b0-4078-8e66-4d589f54c85a-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326241 5121 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326259 5121 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326283 5121 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/7147ca0c-09b0-4078-8e66-4d589f54c85a-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326300 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phphh\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-kube-api-access-phphh\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.326318 5121 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/7147ca0c-09b0-4078-8e66-4d589f54c85a-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881502 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881499 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-8g5jp" event={"ID":"7147ca0c-09b0-4078-8e66-4d589f54c85a","Type":"ContainerDied","Data":"51cf34af5f3e60547305a8dcaaf837202c7932c821c7bc1d4c4374385f24b01a"} Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.881700 5121 scope.go:117] "RemoveContainer" containerID="3f1dcd1be364fba705dc37d8d5a56c1ce77e7516c315dc01cdaf7dd2de0f8521" Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.905369 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:15:17 crc kubenswrapper[5121]: I0218 00:15:17.914499 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-8g5jp"] Feb 18 00:15:19 crc kubenswrapper[5121]: I0218 00:15:19.278788 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" path="/var/lib/kubelet/pods/7147ca0c-09b0-4078-8e66-4d589f54c85a/volumes" Feb 18 00:15:34 crc kubenswrapper[5121]: I0218 00:15:34.545427 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:15:34 crc kubenswrapper[5121]: I0218 00:15:34.546372 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:00 crc kubenswrapper[5121]: 
I0218 00:16:00.147132 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149176 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149228 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149278 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149296 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149522 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4615074-d315-44d4-99e1-61ad71c1e230" containerName="collect-profiles" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.149561 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="7147ca0c-09b0-4078-8e66-4d589f54c85a" containerName="registry" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.200794 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.200883 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.211523 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.211944 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.212152 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.225706 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.327566 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.372894 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"auto-csr-approver-29522896-wgmcl\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.527139 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:00 crc kubenswrapper[5121]: I0218 00:16:00.974633 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:16:01 crc kubenswrapper[5121]: I0218 00:16:01.198028 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerStarted","Data":"fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45"} Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.231757 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerStarted","Data":"07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7"} Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.258042 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" podStartSLOduration=1.433283509 podStartE2EDuration="4.258015634s" podCreationTimestamp="2026-02-18 00:16:00 +0000 UTC" firstStartedPulling="2026-02-18 00:16:00.991947341 +0000 UTC m=+444.506405106" lastFinishedPulling="2026-02-18 00:16:03.816679496 +0000 UTC m=+447.331137231" observedRunningTime="2026-02-18 00:16:04.249326835 +0000 UTC m=+447.763784610" watchObservedRunningTime="2026-02-18 00:16:04.258015634 +0000 UTC m=+447.772473399" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.545054 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.545231 5121 
prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.684206 5121 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-gkz29" Feb 18 00:16:04 crc kubenswrapper[5121]: I0218 00:16:04.712983 5121 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-gkz29" Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.248974 5121 generic.go:358] "Generic (PLEG): container finished" podID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerID="07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7" exitCode=0 Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.249194 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerDied","Data":"07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7"} Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.715321 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-20 00:11:04 +0000 UTC" deadline="2026-03-14 10:26:08.483489612 +0000 UTC" Feb 18 00:16:05 crc kubenswrapper[5121]: I0218 00:16:05.715474 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="586h10m2.76802121s" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.632031 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.716338 5121 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-20 00:11:04 +0000 UTC" deadline="2026-03-14 18:16:31.392275886 +0000 UTC" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.716383 5121 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="594h0m24.675896667s" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.721089 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") pod \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\" (UID: \"17bd0236-52ea-4369-9891-8cf9e1dcff2b\") " Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.732357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg" (OuterVolumeSpecName: "kube-api-access-6kfwg") pod "17bd0236-52ea-4369-9891-8cf9e1dcff2b" (UID: "17bd0236-52ea-4369-9891-8cf9e1dcff2b"). InnerVolumeSpecName "kube-api-access-6kfwg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:16:06 crc kubenswrapper[5121]: I0218 00:16:06.822736 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6kfwg\" (UniqueName: \"kubernetes.io/projected/17bd0236-52ea-4369-9891-8cf9e1dcff2b-kube-api-access-6kfwg\") on node \"crc\" DevicePath \"\"" Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265543 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" event={"ID":"17bd0236-52ea-4369-9891-8cf9e1dcff2b","Type":"ContainerDied","Data":"fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45"} Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265615 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd358c72531020d4e2aa5a6742bae362193df5455e9f796d5de49fb1bf73cb45" Feb 18 00:16:07 crc kubenswrapper[5121]: I0218 00:16:07.265815 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522896-wgmcl" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.544602 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546001 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546075 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.546973 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:16:34 crc kubenswrapper[5121]: I0218 00:16:34.547062 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" gracePeriod=600 Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.475895 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" exitCode=0 Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476026 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9"} Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476780 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} Feb 18 00:16:35 crc kubenswrapper[5121]: I0218 00:16:35.476823 5121 scope.go:117] "RemoveContainer" 
containerID="f39743e1fe1af60126dfcbfc9a8ab370a7d9715a829083d3e64b0b59ec23ba97" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.148149 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.149933 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.149964 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.150176 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" containerName="oc" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.207447 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.207578 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210242 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210585 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.210589 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.267800 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.368710 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.397519 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"auto-csr-approver-29522898-b8lhd\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:00 crc kubenswrapper[5121]: I0218 00:18:00.529536 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:01 crc kubenswrapper[5121]: I0218 00:18:01.037371 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:18:01 crc kubenswrapper[5121]: I0218 00:18:01.080508 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerStarted","Data":"0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5"} Feb 18 00:18:03 crc kubenswrapper[5121]: I0218 00:18:03.096260 5121 generic.go:358] "Generic (PLEG): container finished" podID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerID="6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea" exitCode=0 Feb 18 00:18:03 crc kubenswrapper[5121]: I0218 00:18:03.096357 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerDied","Data":"6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea"} Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.391307 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.535354 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") pod \"0752b905-c20c-4af0-a716-b5297e9ed6fc\" (UID: \"0752b905-c20c-4af0-a716-b5297e9ed6fc\") " Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.542916 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl" (OuterVolumeSpecName: "kube-api-access-trmsl") pod "0752b905-c20c-4af0-a716-b5297e9ed6fc" (UID: "0752b905-c20c-4af0-a716-b5297e9ed6fc"). InnerVolumeSpecName "kube-api-access-trmsl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:18:04 crc kubenswrapper[5121]: I0218 00:18:04.637131 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trmsl\" (UniqueName: \"kubernetes.io/projected/0752b905-c20c-4af0-a716-b5297e9ed6fc-kube-api-access-trmsl\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110325 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" event={"ID":"0752b905-c20c-4af0-a716-b5297e9ed6fc","Type":"ContainerDied","Data":"0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5"} Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110623 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0876be5be9269e988c96245f0476e8e24748abae87f360427db8bf7e2f6d0df5" Feb 18 00:18:05 crc kubenswrapper[5121]: I0218 00:18:05.110376 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522898-b8lhd" Feb 18 00:18:34 crc kubenswrapper[5121]: I0218 00:18:34.545516 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:18:34 crc kubenswrapper[5121]: I0218 00:18:34.546224 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:18:37 crc kubenswrapper[5121]: I0218 00:18:37.543989 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:18:37 crc kubenswrapper[5121]: I0218 00:18:37.544957 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.549756 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.550626 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" containerID="cri-o://74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.550742 5121 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" containerID="cri-o://07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.709900 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710583 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller" containerID="cri-o://28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710676 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node" containerID="cri-o://7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710725 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb" containerID="cri-o://96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710739 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging" containerID="cri-o://74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" gracePeriod=30 Feb 18 00:18:56 crc 
kubenswrapper[5121]: I0218 00:18:56.710789 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710777 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb" containerID="cri-o://a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.710729 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd" containerID="cri-o://d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.751015 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller" containerID="cri-o://79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" gracePeriod=30 Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.806992 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.847071 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986"] Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848716 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848747 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848763 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848772 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848796 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848803 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848930 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" containerName="oc" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848944 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="ovnkube-cluster-manager" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.848957 5121 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerName="kube-rbac-proxy" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.855261 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.929847 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930000 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmw8r\" (UniqueName: \"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930030 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.930150 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") pod \"aa9cd074-60f6-4754-9ef8-567f9274e384\" (UID: \"aa9cd074-60f6-4754-9ef8-567f9274e384\") " Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932022 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config" 
(OuterVolumeSpecName: "ovnkube-config") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932054 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932138 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932266 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" 
Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932297 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932377 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.932395 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/aa9cd074-60f6-4754-9ef8-567f9274e384-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.938396 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r" (OuterVolumeSpecName: "kube-api-access-rmw8r") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "kube-api-access-rmw8r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:18:56 crc kubenswrapper[5121]: I0218 00:18:56.938874 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "aa9cd074-60f6-4754-9ef8-567f9274e384" (UID: "aa9cd074-60f6-4754-9ef8-567f9274e384"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033697 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033864 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033942 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.033995 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034153 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rmw8r\" (UniqueName: 
\"kubernetes.io/projected/aa9cd074-60f6-4754-9ef8-567f9274e384-kube-api-access-rmw8r\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034189 5121 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/aa9cd074-60f6-4754-9ef8-567f9274e384-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034823 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.034931 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/703db6d8-e584-4bdc-ad21-8a159643b2cf-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.039066 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/703db6d8-e584-4bdc-ad21-8a159643b2cf-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" (UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.052841 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxcmt\" (UniqueName: \"kubernetes.io/projected/703db6d8-e584-4bdc-ad21-8a159643b2cf-kube-api-access-cxcmt\") pod \"ovnkube-control-plane-97c9b6c48-7m986\" 
(UID: \"703db6d8-e584-4bdc-ad21-8a159643b2cf\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.080129 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-acl-logging/0.log" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.080596 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-controller/0.log" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.081238 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134767 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134851 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134886 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134936 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134946 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.134963 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135098 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135127 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135151 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") pod 
\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135176 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135211 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135286 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135312 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135369 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135403 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135442 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135473 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135528 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135562 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135591 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: 
\"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135630 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") pod \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\" (UID: \"0ec6f87b-86e0-4893-9709-9dc7381bc95a\") " Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.135980 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket" (OuterVolumeSpecName: "log-socket") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136075 5121 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136096 5121 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-log-socket\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136128 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136156 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136142 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136220 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136502 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136544 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136593 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash" (OuterVolumeSpecName: "host-slash") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136614 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136639 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136674 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136692 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136711 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log" (OuterVolumeSpecName: "node-log") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136735 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.136776 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.137004 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.138435 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.139558 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l" (OuterVolumeSpecName: "kube-api-access-xfl5l") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "kube-api-access-xfl5l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.144358 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zvj44"] Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145282 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145318 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145341 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145352 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145389 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kubecfg-setup" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145401 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kubecfg-setup" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145416 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145428 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145441 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145453 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145469 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145480 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145497 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145508 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145524 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145535 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145558 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145570 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145738 5121 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="sbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145763 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="nbdb" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145776 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="northd" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145789 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-ovn-metrics" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145807 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-acl-logging" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145825 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovn-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145846 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="ovnkube-controller" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.145861 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerName="kube-rbac-proxy-node" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.147127 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "0ec6f87b-86e0-4893-9709-9dc7381bc95a" (UID: "0ec6f87b-86e0-4893-9709-9dc7381bc95a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.164092 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.187927 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" Feb 18 00:18:57 crc kubenswrapper[5121]: W0218 00:18:57.216799 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod703db6d8_e584_4bdc_ad21_8a159643b2cf.slice/crio-e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9 WatchSource:0}: Error finding container e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9: Status 404 returned error can't find the container with id e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9 Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238091 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238250 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238292 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238326 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238486 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238559 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238632 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.238714 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239229 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239301 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239353 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239395 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239438 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239509 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239582 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239625 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239758 
5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239803 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.239878 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240027 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240046 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240070 5121 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 
00:18:57.240079 5121 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240089 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240096 5121 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240107 5121 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240116 5121 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240125 5121 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240149 5121 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-node-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240159 5121 reconciler_common.go:299] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240167 5121 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240175 5121 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240183 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfl5l\" (UniqueName: \"kubernetes.io/projected/0ec6f87b-86e0-4893-9709-9dc7381bc95a-kube-api-access-xfl5l\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240193 5121 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-slash\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240203 5121 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240211 5121 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0ec6f87b-86e0-4893-9709-9dc7381bc95a-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.240220 5121 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/0ec6f87b-86e0-4893-9709-9dc7381bc95a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341324 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341378 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341401 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341418 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341433 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341456 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341481 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341509 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-var-lib-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341552 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 
00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-systemd-units\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341579 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341601 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-node-log\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341597 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-bin\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341660 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341626 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341604 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341741 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341798 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341858 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341926 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342022 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342006 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-log-socket\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342114 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-etc-openvswitch\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342122 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.341582 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-slash\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342182 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-run-netns\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342064 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-cni-netd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342353 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342407 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-ovn\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342482 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342523 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-host-kubelet\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342455 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/0e11ae91-1d70-4646-8a77-13e95651cf36-run-systemd\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-script-lib\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342581 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.342976 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-env-overrides\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.343410 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/0e11ae91-1d70-4646-8a77-13e95651cf36-ovnkube-config\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.344728 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0e11ae91-1d70-4646-8a77-13e95651cf36-ovn-node-metrics-cert\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.366460 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqpkb\" (UniqueName: \"kubernetes.io/projected/0e11ae91-1d70-4646-8a77-13e95651cf36-kube-api-access-jqpkb\") pod \"ovnkube-node-zvj44\" (UID: \"0e11ae91-1d70-4646-8a77-13e95651cf36\") " pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.491232 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521822 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521903 5121 generic.go:358] "Generic (PLEG): container finished" podID="51dcc4ed-63a2-4a92-936e-8ef22eca20d6" containerID="5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299" exitCode=2
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.521969 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerDied","Data":"5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.524187 5121 scope.go:117] "RemoveContainer" containerID="5afa9905764b3ba486f1dce200780b7bf8afb653e42c02f34fe03646732d3299"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.532815 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-acl-logging/0.log"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.534632 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7tprw_0ec6f87b-86e0-4893-9709-9dc7381bc95a/ovn-controller/0.log"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536636 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536688 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536704 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536715 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536725 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536735 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536746 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" exitCode=143
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536735 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536853 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536861 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536885 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536917 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536948 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536974 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536995 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537010 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537031 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537053 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537070 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537087 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537101 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537116 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537130 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537146 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537159 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537176 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537225 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537242 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537256 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537270 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537283 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537297 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537311 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537325 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537340 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537371 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.536757 5121 generic.go:358] "Generic (PLEG): container finished" podID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" exitCode=143
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537639 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7tprw" event={"ID":"0ec6f87b-86e0-4893-9709-9dc7381bc95a","Type":"ContainerDied","Data":"8247d6c91314685e7acd9d477934ca2db261dd3d8ba947e08a5dfa54657f7047"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537719 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537739 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537755 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537772 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537787 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537802 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537818 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537833 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.537847 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546168 5121 generic.go:358] "Generic (PLEG): container finished" podID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerID="07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546205 5121 generic.go:358] "Generic (PLEG): container finished" podID="aa9cd074-60f6-4754-9ef8-567f9274e384" containerID="74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" exitCode=0
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546311 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546693 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546789 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546873 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546940 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546965 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.546977 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547035 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g" event={"ID":"aa9cd074-60f6-4754-9ef8-567f9274e384","Type":"ContainerDied","Data":"3f602af0b907d579f8bad5e82ee216caa9af1e2c69102abc29f1afb596215540"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547047 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.547059 5121 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.563459 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"e84270e5a2567515006ccafc0f0bf720feeed84968dc1986bb2defeb185b14b9"}
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.580864 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.581423 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.585400 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7tprw"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.610813 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.610860 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-rfj5g"]
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.611004 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.638239 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.660347 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.678822 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.814295 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.836500 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.870600 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.899876 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.900439 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900467 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900488 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.900958 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.900982 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901000 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901252 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901280 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901291 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901530 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901576 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901593 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.901963 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901981 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.901993 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.902461 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.902483 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.902497 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.902994 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903011 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903023 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.903615 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903636 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.903669 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: E0218 00:18:57.904516 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"
Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.904582 5121 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.904623 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.905996 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906034 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906694 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not 
exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.906730 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907081 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907107 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907571 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.907601 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908035 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status 
\"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908069 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908472 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.908500 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.921774 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.921831 5121 scope.go:117] "RemoveContainer" 
containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922396 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922484 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922878 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.922921 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923171 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could 
not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923193 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923429 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923448 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923639 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923716 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 
00:18:57.923918 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.923935 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924133 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924150 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924377 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 
7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.924402 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.925747 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.925786 5121 scope.go:117] "RemoveContainer" containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.926467 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.926523 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927009 5121 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927036 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927327 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927364 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927698 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not 
exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927726 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927963 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.927989 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928216 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928255 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928444 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status 
\"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928465 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928734 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928760 5121 scope.go:117] "RemoveContainer" containerID="74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928981 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872"} err="failed to get container status \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": rpc error: code = NotFound desc = could not find container \"74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872\": container with ID starting with 74d5fc25b69a860705d51d92953b236c8b4b3fbb23b86d8d070dea56064b2872 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.928999 5121 scope.go:117] "RemoveContainer" 
containerID="28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929200 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea"} err="failed to get container status \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": rpc error: code = NotFound desc = could not find container \"28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea\": container with ID starting with 28c2a0dc2c5166b8ecf4729c0183ba5da8fc2ff3695e036dff001584289502ea not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929216 5121 scope.go:117] "RemoveContainer" containerID="9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929470 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0"} err="failed to get container status \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": rpc error: code = NotFound desc = could not find container \"9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0\": container with ID starting with 9f615409439ed5d81ca6b71b1415c40814512247681ca92c19b8ef43098e43d0 not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.929491 5121 scope.go:117] "RemoveContainer" containerID="79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930026 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe"} err="failed to get container status \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": rpc error: code = NotFound desc = could 
not find container \"79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe\": container with ID starting with 79b5b145fa4d871b3a98d4856651c9f9eb689039a367394a375b866c9fc92cbe not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930075 5121 scope.go:117] "RemoveContainer" containerID="96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930358 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c"} err="failed to get container status \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": rpc error: code = NotFound desc = could not find container \"96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c\": container with ID starting with 96f8700313adf263c014b9298f7fa957f3b4758c89e4fdfe1c9f038b80572c5c not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930386 5121 scope.go:117] "RemoveContainer" containerID="a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930754 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d"} err="failed to get container status \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": rpc error: code = NotFound desc = could not find container \"a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d\": container with ID starting with a77a1fabcdbea0d3dad444825a1cc336de50bef4c543cfbc7c12400ef467405d not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.930822 5121 scope.go:117] "RemoveContainer" containerID="d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 
00:18:57.931073 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e"} err="failed to get container status \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": rpc error: code = NotFound desc = could not find container \"d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e\": container with ID starting with d1afe85bd4be949029304036a0fba8c09da273e4b65d1b3ad606faa512afb87e not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931094 5121 scope.go:117] "RemoveContainer" containerID="dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931315 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db"} err="failed to get container status \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": rpc error: code = NotFound desc = could not find container \"dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db\": container with ID starting with dc713cf94a161d4a0eaa19928d0aa5c1ab4b95d1b209e699aa82ad2615b544db not found: ID does not exist" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931354 5121 scope.go:117] "RemoveContainer" containerID="7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a" Feb 18 00:18:57 crc kubenswrapper[5121]: I0218 00:18:57.931568 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a"} err="failed to get container status \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": rpc error: code = NotFound desc = could not find container \"7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a\": container with ID starting with 
7742b3bbd6159a30ab29fe31f9c8d43269dce649e5ef900362926ad2debf6e8a not found: ID does not exist" Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575137 5121 generic.go:358] "Generic (PLEG): container finished" podID="0e11ae91-1d70-4646-8a77-13e95651cf36" containerID="f924400afe73fc2ebb7c7d384ff314b8d1c82b7210d0334263517b991dc5d61b" exitCode=0 Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575421 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerDied","Data":"f924400afe73fc2ebb7c7d384ff314b8d1c82b7210d0334263517b991dc5d61b"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.575459 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"f27f501182976119f1afa288a92ebcc5f23a554452521ac4737ee868c17ac686"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.584566 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"1fb339aa8ef13951f91b65e9b4bd719830b36c604b32f5452a201fe333fb18d8"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.584723 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" event={"ID":"703db6d8-e584-4bdc-ad21-8a159643b2cf","Type":"ContainerStarted","Data":"6f5fc4bafc017b43c626b2bed1107d2648ceef1f468ab846a2503e4c89c23a77"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.591214 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.591419 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-9dxsb" event={"ID":"51dcc4ed-63a2-4a92-936e-8ef22eca20d6","Type":"ContainerStarted","Data":"325ca769f8b12afd18cac46fed98d6343a14a622a72b47e474f86387625e75d2"} Feb 18 00:18:58 crc kubenswrapper[5121]: I0218 00:18:58.667302 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-7m986" podStartSLOduration=2.6672693929999998 podStartE2EDuration="2.667269393s" podCreationTimestamp="2026-02-18 00:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:18:58.654360694 +0000 UTC m=+622.168818519" watchObservedRunningTime="2026-02-18 00:18:58.667269393 +0000 UTC m=+622.181727178" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.279487 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec6f87b-86e0-4893-9709-9dc7381bc95a" path="/var/lib/kubelet/pods/0ec6f87b-86e0-4893-9709-9dc7381bc95a/volumes" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.280640 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa9cd074-60f6-4754-9ef8-567f9274e384" path="/var/lib/kubelet/pods/aa9cd074-60f6-4754-9ef8-567f9274e384/volumes" Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.603836 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"7c19e98acf0cb7662d53e9c18b19bb018020832f148536f78f00ab76b113b2ce"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604166 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"6d01a9a7e5cc0ab6888ee37f85cd72b3a540781bb3e80d442dd34d6790703929"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604184 5121 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"31227a932831b4dffcb475ad97571a9a55c275a87a2e61268c08fcb0125a89bb"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604199 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"d38d6e7946f39b34b8d755d85b6d3020fdfa433b371f8478185c6bc94d40b354"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604210 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"05cef983d9f92e2521302b5c44899045032387b00bd6df89cb5d8d49897a2dfb"} Feb 18 00:18:59 crc kubenswrapper[5121]: I0218 00:18:59.604221 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"82f5dae6c4143c9534bc285bfd8c2c7da0dcd1723a78273943179cf67afec0bf"} Feb 18 00:19:02 crc kubenswrapper[5121]: I0218 00:19:02.631283 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"269a4106dc7d73614339bd6b8c2e9277c5c88b77aa292962acec0cefdd9c8bc3"} Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.545221 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.545594 5121 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.655197 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" event={"ID":"0e11ae91-1d70-4646-8a77-13e95651cf36","Type":"ContainerStarted","Data":"6327744c97df40c767a9dc8867f99340483329f333dcd2ef9579fdc6bc67d69a"} Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656370 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656416 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.656430 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.701562 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" podStartSLOduration=7.7015418 podStartE2EDuration="7.7015418s" podCreationTimestamp="2026-02-18 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:19:04.698734424 +0000 UTC m=+628.213192179" watchObservedRunningTime="2026-02-18 00:19:04.7015418 +0000 UTC m=+628.215999535" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 00:19:04.705162 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:04 crc kubenswrapper[5121]: I0218 
00:19:04.722054 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545287 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545810 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.545860 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.546530 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.546604 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" gracePeriod=600 Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.742423 5121 
provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.884821 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" exitCode=0 Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.884936 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44"} Feb 18 00:19:34 crc kubenswrapper[5121]: I0218 00:19:34.885373 5121 scope.go:117] "RemoveContainer" containerID="71b6871ef3c80016f97d146d25362805bcfe3182f1291d088e3b569d2cd81ca9" Feb 18 00:19:35 crc kubenswrapper[5121]: I0218 00:19:35.894706 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} Feb 18 00:19:36 crc kubenswrapper[5121]: I0218 00:19:36.693407 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zvj44" Feb 18 00:19:37 crc kubenswrapper[5121]: I0218 00:19:37.696611 5121 scope.go:117] "RemoveContainer" containerID="07b4772c2602825881eaa061e06260118b18d01c3f5f4da687f9c9bc6923bcb5" Feb 18 00:19:37 crc kubenswrapper[5121]: I0218 00:19:37.730059 5121 scope.go:117] "RemoveContainer" containerID="74d12aeb72b6955c1e2a2b332c417b6ba1c0255b18c1a07fb22751b59e6d323e" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.149679 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.161155 5121 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.161383 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164319 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164637 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.164721 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.287913 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.389199 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.425186 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod 
\"auto-csr-approver-29522900-85n6k\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.503030 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:00 crc kubenswrapper[5121]: I0218 00:20:00.767524 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:20:00 crc kubenswrapper[5121]: W0218 00:20:00.778943 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d8c4383_cf7d_4c99_badf_42f433b91870.slice/crio-14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db WatchSource:0}: Error finding container 14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db: Status 404 returned error can't find the container with id 14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db Feb 18 00:20:01 crc kubenswrapper[5121]: I0218 00:20:01.075338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerStarted","Data":"14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db"} Feb 18 00:20:03 crc kubenswrapper[5121]: I0218 00:20:03.094561 5121 generic.go:358] "Generic (PLEG): container finished" podID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerID="2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618" exitCode=0 Feb 18 00:20:03 crc kubenswrapper[5121]: I0218 00:20:03.094641 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerDied","Data":"2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618"} Feb 18 00:20:04 crc kubenswrapper[5121]: 
I0218 00:20:04.439640 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.549140 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") pod \"6d8c4383-cf7d-4c99-badf-42f433b91870\" (UID: \"6d8c4383-cf7d-4c99-badf-42f433b91870\") " Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.555258 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb" (OuterVolumeSpecName: "kube-api-access-2v6hb") pod "6d8c4383-cf7d-4c99-badf-42f433b91870" (UID: "6d8c4383-cf7d-4c99-badf-42f433b91870"). InnerVolumeSpecName "kube-api-access-2v6hb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:20:04 crc kubenswrapper[5121]: I0218 00:20:04.651847 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2v6hb\" (UniqueName: \"kubernetes.io/projected/6d8c4383-cf7d-4c99-badf-42f433b91870-kube-api-access-2v6hb\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111560 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522900-85n6k" Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111574 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522900-85n6k" event={"ID":"6d8c4383-cf7d-4c99-badf-42f433b91870","Type":"ContainerDied","Data":"14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db"} Feb 18 00:20:05 crc kubenswrapper[5121]: I0218 00:20:05.111992 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14c6711afb4234d8cac94f53880ff49d2b39087a36c3d9a4b9f217272be614db" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.073312 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.074039 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9knfx" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server" containerID="cri-o://7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" gracePeriod=30 Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.504287 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.579923 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.580681 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.580971 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") pod \"c9e0e10c-e462-4d05-9e54-25f1527555c1\" (UID: \"c9e0e10c-e462-4d05-9e54-25f1527555c1\") " Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.582010 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities" (OuterVolumeSpecName: "utilities") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.586463 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg" (OuterVolumeSpecName: "kube-api-access-vzcbg") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "kube-api-access-vzcbg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.600480 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c9e0e10c-e462-4d05-9e54-25f1527555c1" (UID: "c9e0e10c-e462-4d05-9e54-25f1527555c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682683 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682742 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vzcbg\" (UniqueName: \"kubernetes.io/projected/c9e0e10c-e462-4d05-9e54-25f1527555c1-kube-api-access-vzcbg\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:06 crc kubenswrapper[5121]: I0218 00:20:06.682761 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c9e0e10c-e462-4d05-9e54-25f1527555c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.126882 5121 generic.go:358] "Generic (PLEG): container finished" podID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" exitCode=0 Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.126958 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9knfx" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127314 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"} Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127373 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9knfx" event={"ID":"c9e0e10c-e462-4d05-9e54-25f1527555c1","Type":"ContainerDied","Data":"02d27ed8cf93394976ad9f8bc6796fe0b258dd63ddf991109944863c08a856d1"} Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.127412 5121 scope.go:117] "RemoveContainer" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.149130 5121 scope.go:117] "RemoveContainer" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.167452 5121 scope.go:117] "RemoveContainer" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.179445 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.183692 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9knfx"] Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.202873 5121 scope.go:117] "RemoveContainer" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.203326 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": container with ID starting with 7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825 not found: ID does not exist" containerID="7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.203402 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825"} err="failed to get container status \"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": rpc error: code = NotFound desc = could not find container \"7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825\": container with ID starting with 7c88a021e28a22ed7c555cbc2a13f610644f92c68920f8bb2b1079e053435825 not found: ID does not exist" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.203481 5121 scope.go:117] "RemoveContainer" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1" Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.204091 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": container with ID starting with 186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1 not found: ID does not exist" containerID="186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204172 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1"} err="failed to get container status \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": rpc error: code = NotFound desc = could not find container \"186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1\": container with ID 
starting with 186e7bab42fc75bcbf5c531dd4833170e85687574cc4b3e5b163a44af0d40ed1 not found: ID does not exist" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204235 5121 scope.go:117] "RemoveContainer" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18" Feb 18 00:20:07 crc kubenswrapper[5121]: E0218 00:20:07.204483 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": container with ID starting with 69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18 not found: ID does not exist" containerID="69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.204556 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18"} err="failed to get container status \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": rpc error: code = NotFound desc = could not find container \"69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18\": container with ID starting with 69485afbe581b9b8326aa7b7164ce256290d242de3f8edf94f3186175451ae18 not found: ID does not exist" Feb 18 00:20:07 crc kubenswrapper[5121]: I0218 00:20:07.279237 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" path="/var/lib/kubelet/pods/c9e0e10c-e462-4d05-9e54-25f1527555c1/volumes" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.751379 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"] Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753167 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" 
containerName="registry-server" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753203 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753236 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-content" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753251 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-content" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753307 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753323 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753366 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-utilities" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753379 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="extract-utilities" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753539 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c9e0e10c-e462-4d05-9e54-25f1527555c1" containerName="registry-server" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.753563 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" containerName="oc" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.787494 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"] Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.787773 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.793293 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853176 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853236 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.853279 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" Feb 18 00:20:09 crc 
kubenswrapper[5121]: I0218 00:20:09.954206 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954301 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954354 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954900 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.954995 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:09 crc kubenswrapper[5121]: I0218 00:20:09.972193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:10 crc kubenswrapper[5121]: I0218 00:20:10.105688 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:10 crc kubenswrapper[5121]: I0218 00:20:10.337043 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"]
Feb 18 00:20:10 crc kubenswrapper[5121]: W0218 00:20:10.350236 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda138e59c_43ff_4154_897a_b070bedb8045.slice/crio-2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53 WatchSource:0}: Error finding container 2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53: Status 404 returned error can't find the container with id 2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159100 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="9a5eee9995db7f0af9d80b87e08b08557dda91ed5b0fe76101170f0cfde01214" exitCode=0
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159240 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"9a5eee9995db7f0af9d80b87e08b08557dda91ed5b0fe76101170f0cfde01214"}
Feb 18 00:20:11 crc kubenswrapper[5121]: I0218 00:20:11.159420 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerStarted","Data":"2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"}
Feb 18 00:20:12 crc kubenswrapper[5121]: I0218 00:20:12.170559 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerStarted","Data":"2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053"}
Feb 18 00:20:13 crc kubenswrapper[5121]: I0218 00:20:13.179141 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053" exitCode=0
Feb 18 00:20:13 crc kubenswrapper[5121]: I0218 00:20:13.179253 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"2984588166a413887ae7b2b8448e75c7a0eea7babbce39a3e93d04d69c3d0053"}
Feb 18 00:20:14 crc kubenswrapper[5121]: I0218 00:20:14.191483 5121 generic.go:358] "Generic (PLEG): container finished" podID="a138e59c-43ff-4154-897a-b070bedb8045" containerID="527d085d9cc5428e5e01712b0b101d458a0f2eb267c0f9c71540bd370158259b" exitCode=0
Feb 18 00:20:14 crc kubenswrapper[5121]: I0218 00:20:14.191687 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"527d085d9cc5428e5e01712b0b101d458a0f2eb267c0f9c71540bd370158259b"}
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.507513 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.631880 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.631945 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.632198 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") pod \"a138e59c-43ff-4154-897a-b070bedb8045\" (UID: \"a138e59c-43ff-4154-897a-b070bedb8045\") "
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.634903 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle" (OuterVolumeSpecName: "bundle") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.639804 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p" (OuterVolumeSpecName: "kube-api-access-6l27p") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "kube-api-access-6l27p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.734642 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.734745 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6l27p\" (UniqueName: \"kubernetes.io/projected/a138e59c-43ff-4154-897a-b070bedb8045-kube-api-access-6l27p\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.899704 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util" (OuterVolumeSpecName: "util") pod "a138e59c-43ff-4154-897a-b070bedb8045" (UID: "a138e59c-43ff-4154-897a-b070bedb8045"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:15 crc kubenswrapper[5121]: I0218 00:20:15.937122 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a138e59c-43ff-4154-897a-b070bedb8045-util\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209462 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209494 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s" event={"ID":"a138e59c-43ff-4154-897a-b070bedb8045","Type":"ContainerDied","Data":"2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"}
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.209555 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d30a83bfc74be3b18e6e417b82ff31daad1d5e43f43e017d01ebb05358e4a53"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.754137 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756168 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="pull"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756347 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="pull"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756459 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="util"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756553 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="util"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756683 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.756783 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.757024 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a138e59c-43ff-4154-897a-b070bedb8045" containerName="extract"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.769731 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.770068 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.773339 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849350 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849529 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.849753 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951315 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951463 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.951540 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.952318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.952493 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:16 crc kubenswrapper[5121]: I0218 00:20:16.982617 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:17 crc kubenswrapper[5121]: I0218 00:20:17.101221 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:17 crc kubenswrapper[5121]: I0218 00:20:17.382247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"]
Feb 18 00:20:17 crc kubenswrapper[5121]: W0218 00:20:17.389554 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod763c3704_8ae0_4b52_9eb0_2dbef76acc66.slice/crio-45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca WatchSource:0}: Error finding container 45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca: Status 404 returned error can't find the container with id 45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232020 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="796638aaa83111f70c8b12404164778f38cb4b6acfc8bc74058fe2fb5032bfad" exitCode=0
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232246 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"796638aaa83111f70c8b12404164778f38cb4b6acfc8bc74058fe2fb5032bfad"}
Feb 18 00:20:18 crc kubenswrapper[5121]: I0218 00:20:18.232719 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerStarted","Data":"45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca"}
Feb 18 00:20:19 crc kubenswrapper[5121]: I0218 00:20:19.241416 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="740503f0d1b4f51c669bb09f1be03c46ca2ca22ef2454b9181873fd5d8664fa1" exitCode=0
Feb 18 00:20:19 crc kubenswrapper[5121]: I0218 00:20:19.241570 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"740503f0d1b4f51c669bb09f1be03c46ca2ca22ef2454b9181873fd5d8664fa1"}
Feb 18 00:20:20 crc kubenswrapper[5121]: I0218 00:20:20.249797 5121 generic.go:358] "Generic (PLEG): container finished" podID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerID="5728d0c2a69b93e313083b883cdd7419fcdd7c48fd330cdecd2170eba1e85741" exitCode=0
Feb 18 00:20:20 crc kubenswrapper[5121]: I0218 00:20:20.249857 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"5728d0c2a69b93e313083b883cdd7419fcdd7c48fd330cdecd2170eba1e85741"}
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.184896 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"]
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.197390 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"]
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.197549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308830 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308891 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.308922 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410766 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410887 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.410924 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.411626 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.411947 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.468567 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.521883 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.590133 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.713966 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") "
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714017 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") "
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714054 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") pod \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\" (UID: \"763c3704-8ae0-4b52-9eb0-2dbef76acc66\") "
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.714963 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle" (OuterVolumeSpecName: "bundle") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.724979 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk" (OuterVolumeSpecName: "kube-api-access-plwkk") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "kube-api-access-plwkk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.738013 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util" (OuterVolumeSpecName: "util") pod "763c3704-8ae0-4b52-9eb0-2dbef76acc66" (UID: "763c3704-8ae0-4b52-9eb0-2dbef76acc66"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.808124 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59"]
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815061 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-util\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815094 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/763c3704-8ae0-4b52-9eb0-2dbef76acc66-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:21 crc kubenswrapper[5121]: I0218 00:20:21.815103 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-plwkk\" (UniqueName: \"kubernetes.io/projected/763c3704-8ae0-4b52-9eb0-2dbef76acc66-kube-api-access-plwkk\") on node \"crc\" DevicePath \"\""
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.260814 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="8d2d78e70261a82b7fdf8e73d42b1e863dbcc4037a6ab2c099caee73e2c7adad" exitCode=0
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.260922 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"8d2d78e70261a82b7fdf8e73d42b1e863dbcc4037a6ab2c099caee73e2c7adad"}
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.261217 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerStarted","Data":"0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab"}
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265130 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959" event={"ID":"763c3704-8ae0-4b52-9eb0-2dbef76acc66","Type":"ContainerDied","Data":"45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca"}
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265168 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45977fb2a3729a7fe70d257c0738a357012490c2308507126a1db74178d770ca"
Feb 18 00:20:22 crc kubenswrapper[5121]: I0218 00:20:22.265237 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.847986 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"]
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850152 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850181 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850199 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="pull"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850205 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="pull"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850220 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="util"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850226 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="util"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.850524 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="763c3704-8ae0-4b52-9eb0-2dbef76acc66" containerName="extract"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.859922 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.864247 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"]
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.864869 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-hnzh6\""
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.865261 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.865836 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.974005 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.993632 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"]
Feb 18 00:20:26 crc kubenswrapper[5121]: I0218 00:20:26.998929 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.007052 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.007166 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-c7mzw\""
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.016585 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"]
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.020385 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"]
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.024255 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.036079 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"]
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.074985 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.144109 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8526\" (UniqueName: \"kubernetes.io/projected/ac0aed84-6c11-41de-9f31-3a7b2a313944-kube-api-access-g8526\") pod \"obo-prometheus-operator-9bc85b4bf-s7jq7\" (UID: \"ac0aed84-6c11-41de-9f31-3a7b2a313944\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177313 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177804 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177829 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.177897 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.182201 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.242514 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"]
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282577 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282684 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282711 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.282737 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") "
pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.304005 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.308241 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5551a95c-fb98-465f-ba4f-3eacc393a47b-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d\" (UID: \"5551a95c-fb98-465f-ba4f-3eacc393a47b\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.308744 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.316284 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/34785a14-a8e1-49c9-bcca-3996487db06f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd\" (UID: \"34785a14-a8e1-49c9-bcca-3996487db06f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.319988 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.335522 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.349143 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.350674 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-4fsfp\"" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.355567 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.362458 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.378404 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerStarted","Data":"8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1"} Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.468115 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.486920 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.486985 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.589911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" 
Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.590336 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.597590 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2277040f-ef0e-4742-a923-fff6ccf3e5aa-observability-operator-tls\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.613331 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nxj2\" (UniqueName: \"kubernetes.io/projected/2277040f-ef0e-4742-a923-fff6ccf3e5aa-kube-api-access-5nxj2\") pod \"observability-operator-85c68dddb-p6t4z\" (UID: \"2277040f-ef0e-4742-a923-fff6ccf3e5aa\") " pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: I0218 00:20:27.681035 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" Feb 18 00:20:27 crc kubenswrapper[5121]: W0218 00:20:27.885110 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac0aed84_6c11_41de_9f31_3a7b2a313944.slice/crio-e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e WatchSource:0}: Error finding container e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e: Status 404 returned error can't find the container with id e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e Feb 18 00:20:27 crc kubenswrapper[5121]: W0218 00:20:27.990395 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2277040f_ef0e_4742_a923_fff6ccf3e5aa.slice/crio-8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db WatchSource:0}: Error finding container 8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db: Status 404 returned error can't find the container with id 8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110125 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110209 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110243 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110257 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-p6t4z"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110278 5121 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d"] Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.110346 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.112710 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-x8vcf\"" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.306075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.306583 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.388534 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1" exitCode=0 Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.388609 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" 
event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"8f323c92ddff5143e5ae7f33bc8f01cc713ac73b28d09bee3949f4ded86ad0a1"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.389982 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" event={"ID":"34785a14-a8e1-49c9-bcca-3996487db06f","Type":"ContainerStarted","Data":"d704ff6ba35df71fea48447f1e3530ec049fd77f8e7656ef84d2d9eef4a6ceda"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.391275 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" event={"ID":"5551a95c-fb98-465f-ba4f-3eacc393a47b","Type":"ContainerStarted","Data":"a276e1a02df3c3d24d44f86a2434fcb73240ef71abcfba38fa640c0e83f1d234"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.393019 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" event={"ID":"ac0aed84-6c11-41de-9f31-3a7b2a313944","Type":"ContainerStarted","Data":"e68c3b8f49f26619c7037897cd25bbe7b4dbe054f41da0e71a1011da6eec437e"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.394154 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" event={"ID":"2277040f-ef0e-4742-a923-fff6ccf3e5aa","Type":"ContainerStarted","Data":"8979326394b35e78e50c96649cab7ff6a400601b9323ac17d59a99665a95a8db"} Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.408205 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 
00:20:28.408461 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.409431 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/e476d06d-6937-425a-b4b9-ef90c4e141f5-openshift-service-ca\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.431518 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgdhp\" (UniqueName: \"kubernetes.io/projected/e476d06d-6937-425a-b4b9-ef90c4e141f5-kube-api-access-bgdhp\") pod \"perses-operator-669c9f96b5-6hzks\" (UID: \"e476d06d-6937-425a-b4b9-ef90c4e141f5\") " pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.433060 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:20:28 crc kubenswrapper[5121]: I0218 00:20:28.661675 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-6hzks"] Feb 18 00:20:28 crc kubenswrapper[5121]: W0218 00:20:28.665276 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode476d06d_6937_425a_b4b9_ef90c4e141f5.slice/crio-46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012 WatchSource:0}: Error finding container 46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012: Status 404 returned error can't find the container with id 46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012 Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.408132 5121 generic.go:358] "Generic (PLEG): container finished" podID="73314776-9f0b-451b-a26b-15edd18cc220" containerID="f27f1f3867efe4d258e9b2bc693777b0bbc85f57be6a03a3b428c474f9f8df82" exitCode=0 Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.408207 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"f27f1f3867efe4d258e9b2bc693777b0bbc85f57be6a03a3b428c474f9f8df82"} Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.410016 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" event={"ID":"e476d06d-6937-425a-b4b9-ef90c4e141f5","Type":"ContainerStarted","Data":"46c2a78483139558a504e5f3759dc18b529ea3cff46aa3c6af9d5c7d80ec4012"} Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.841977 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.848977 5121 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851466 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851872 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.851922 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.852392 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-cdrdz\"" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.876095 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937534 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937613 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:29 crc kubenswrapper[5121]: I0218 00:20:29.937641 
5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038430 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038540 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.038576 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.044413 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-webhook-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 
crc kubenswrapper[5121]: I0218 00:20:30.044791 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0abf839-f912-4864-83f7-db2da1ec1276-apiservice-cert\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.084997 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxlb6\" (UniqueName: \"kubernetes.io/projected/d0abf839-f912-4864-83f7-db2da1ec1276-kube-api-access-bxlb6\") pod \"elastic-operator-69d546b4c8-bwf25\" (UID: \"d0abf839-f912-4864-83f7-db2da1ec1276\") " pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.168967 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.788691 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.818156 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-69d546b4c8-bwf25"] Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851503 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851602 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.851720 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") pod \"73314776-9f0b-451b-a26b-15edd18cc220\" (UID: \"73314776-9f0b-451b-a26b-15edd18cc220\") " Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.853740 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle" (OuterVolumeSpecName: "bundle") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.868196 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z" (OuterVolumeSpecName: "kube-api-access-xgv2z") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "kube-api-access-xgv2z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.876540 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util" (OuterVolumeSpecName: "util") pod "73314776-9f0b-451b-a26b-15edd18cc220" (UID: "73314776-9f0b-451b-a26b-15edd18cc220"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:20:30 crc kubenswrapper[5121]: W0218 00:20:30.882868 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0abf839_f912_4864_83f7_db2da1ec1276.slice/crio-23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500 WatchSource:0}: Error finding container 23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500: Status 404 returned error can't find the container with id 23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500 Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953748 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953784 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgv2z\" (UniqueName: \"kubernetes.io/projected/73314776-9f0b-451b-a26b-15edd18cc220-kube-api-access-xgv2z\") on node \"crc\" DevicePath 
\"\"" Feb 18 00:20:30 crc kubenswrapper[5121]: I0218 00:20:30.953795 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/73314776-9f0b-451b-a26b-15edd18cc220-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.444344 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" event={"ID":"d0abf839-f912-4864-83f7-db2da1ec1276","Type":"ContainerStarted","Data":"23423d771a2117bd619086556efb146cf307d73bb1bff553eef153b66f589500"} Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447532 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447538 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59" event={"ID":"73314776-9f0b-451b-a26b-15edd18cc220","Type":"ContainerDied","Data":"0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab"} Feb 18 00:20:31 crc kubenswrapper[5121]: I0218 00:20:31.447576 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbe2bdb8ef749d3a2310a107c55de17180ce8b5f9c877ec97435ce50dda94ab" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.499609 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"] Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500862 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract" Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500878 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract" Feb 18 
00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500892 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="util"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500899 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="util"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500915 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="pull"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.500922 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="pull"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.501044 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="73314776-9f0b-451b-a26b-15edd18cc220" containerName="extract"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.507445 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509437 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\""
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509479 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-vr9dg\""
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.509489 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\""
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.515876 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"]
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.552006 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" event={"ID":"d0abf839-f912-4864-83f7-db2da1ec1276","Type":"ContainerStarted","Data":"8db66948574d1cc857ae71290892f2b0782b6e317357e587729370fea9d500e6"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.554603 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" event={"ID":"2277040f-ef0e-4742-a923-fff6ccf3e5aa","Type":"ContainerStarted","Data":"1b2d36610474e6a39f98ff5ad509058ce41d4c7dfbb0bce3ed9bbec348d89c97"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.555428 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-p6t4z"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.558064 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" event={"ID":"34785a14-a8e1-49c9-bcca-3996487db06f","Type":"ContainerStarted","Data":"9ce0353ff35db1ba4632d0400602d2a7f8d71f1d011029b1fbe2dc48d06441d1"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.562059 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-p6t4z"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.562322 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" event={"ID":"5551a95c-fb98-465f-ba4f-3eacc393a47b","Type":"ContainerStarted","Data":"96b3e633c6c5c13cdaf3aeb263cf087e66281caab0da3161f74bc21ae756d8ef"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.564140 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" event={"ID":"e476d06d-6937-425a-b4b9-ef90c4e141f5","Type":"ContainerStarted","Data":"b0a2af128183350148df491d02a83b42fb7027201efea38e535a00e58acbaecd"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.564278 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-6hzks"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.565247 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" event={"ID":"ac0aed84-6c11-41de-9f31-3a7b2a313944","Type":"ContainerStarted","Data":"dca373cb3c86dc53fa750829d288106a1eb0a086ee8fcdc81718f50c0d546240"}
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.577957 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-69d546b4c8-bwf25" podStartSLOduration=3.009646082 podStartE2EDuration="14.577937204s" podCreationTimestamp="2026-02-18 00:20:29 +0000 UTC" firstStartedPulling="2026-02-18 00:20:30.906437756 +0000 UTC m=+714.420895481" lastFinishedPulling="2026-02-18 00:20:42.474728848 +0000 UTC m=+725.989186603" observedRunningTime="2026-02-18 00:20:43.571702444 +0000 UTC m=+727.086160239" watchObservedRunningTime="2026-02-18 00:20:43.577937204 +0000 UTC m=+727.092394939"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.601399 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-6hzks" podStartSLOduration=2.791618133 podStartE2EDuration="16.60137737s" podCreationTimestamp="2026-02-18 00:20:27 +0000 UTC" firstStartedPulling="2026-02-18 00:20:28.669286239 +0000 UTC m=+712.183743974" lastFinishedPulling="2026-02-18 00:20:42.479045476 +0000 UTC m=+725.993503211" observedRunningTime="2026-02-18 00:20:43.591176283 +0000 UTC m=+727.105634018" watchObservedRunningTime="2026-02-18 00:20:43.60137737 +0000 UTC m=+727.115835135"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.603983 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.604523 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.641065 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd" podStartSLOduration=3.11165487 podStartE2EDuration="17.641044428s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.943554711 +0000 UTC m=+711.458012436" lastFinishedPulling="2026-02-18 00:20:42.472944249 +0000 UTC m=+725.987401994" observedRunningTime="2026-02-18 00:20:43.638323534 +0000 UTC m=+727.152781279" watchObservedRunningTime="2026-02-18 00:20:43.641044428 +0000 UTC m=+727.155502163"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.668099 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d" podStartSLOduration=3.223241769 podStartE2EDuration="17.668077642s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:28.058161783 +0000 UTC m=+711.572619518" lastFinishedPulling="2026-02-18 00:20:42.502997656 +0000 UTC m=+726.017455391" observedRunningTime="2026-02-18 00:20:43.662330876 +0000 UTC m=+727.176788641" watchObservedRunningTime="2026-02-18 00:20:43.668077642 +0000 UTC m=+727.182535407"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.705819 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.705911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.706318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.728834 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-p6t4z" podStartSLOduration=2.196204638 podStartE2EDuration="16.728805711s" podCreationTimestamp="2026-02-18 00:20:27 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.992795221 +0000 UTC m=+711.507252956" lastFinishedPulling="2026-02-18 00:20:42.525396294 +0000 UTC m=+726.039854029" observedRunningTime="2026-02-18 00:20:43.706763863 +0000 UTC m=+727.221221598" watchObservedRunningTime="2026-02-18 00:20:43.728805711 +0000 UTC m=+727.243263456"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.731937 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-s7jq7" podStartSLOduration=3.131782322 podStartE2EDuration="17.731923927s" podCreationTimestamp="2026-02-18 00:20:26 +0000 UTC" firstStartedPulling="2026-02-18 00:20:27.891567225 +0000 UTC m=+711.406024960" lastFinishedPulling="2026-02-18 00:20:42.49170883 +0000 UTC m=+726.006166565" observedRunningTime="2026-02-18 00:20:43.728354709 +0000 UTC m=+727.242812454" watchObservedRunningTime="2026-02-18 00:20:43.731923927 +0000 UTC m=+727.246381662"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.756580 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt2x5\" (UniqueName: \"kubernetes.io/projected/f30b7d1c-327c-49cc-9f8e-4baf945b1e11-kube-api-access-wt2x5\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-59qlf\" (UID: \"f30b7d1c-327c-49cc-9f8e-4baf945b1e11\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:43 crc kubenswrapper[5121]: I0218 00:20:43.823408 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"
Feb 18 00:20:44 crc kubenswrapper[5121]: I0218 00:20:44.046978 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf"]
Feb 18 00:20:44 crc kubenswrapper[5121]: I0218 00:20:44.571233 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" event={"ID":"f30b7d1c-327c-49cc-9f8e-4baf945b1e11","Type":"ContainerStarted","Data":"302ca29521be5d53821f7083bc37ae1913ae9f6ff3a07ae21b11b6cb63b8a256"}
Feb 18 00:20:48 crc kubenswrapper[5121]: I0218 00:20:48.602695 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" event={"ID":"f30b7d1c-327c-49cc-9f8e-4baf945b1e11","Type":"ContainerStarted","Data":"e2c621c76960c6646d92782b56d4b7b817c5acfbc21ccef71c3d3727f71862ab"}
Feb 18 00:20:48 crc kubenswrapper[5121]: I0218 00:20:48.630511 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-59qlf" podStartSLOduration=2.007822073 podStartE2EDuration="5.630482912s" podCreationTimestamp="2026-02-18 00:20:43 +0000 UTC" firstStartedPulling="2026-02-18 00:20:44.047782846 +0000 UTC m=+727.562240581" lastFinishedPulling="2026-02-18 00:20:47.670443685 +0000 UTC m=+731.184901420" observedRunningTime="2026-02-18 00:20:48.624686625 +0000 UTC m=+732.139144380" watchObservedRunningTime="2026-02-18 00:20:48.630482912 +0000 UTC m=+732.144940657"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.064944 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.079350 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.080066 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.093364 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.098680 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099245 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099303 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099530 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-6w67p\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.099828 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100039 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100194 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.100355 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\""
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209768 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209828 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209865 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209895 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.209924 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210075 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210160 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210187 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210232 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210251 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210391 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210429 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210457 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210490 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.210510 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.312803 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313006 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313068 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313357 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313456 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313459 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313609 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313683 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313866 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313907 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.313978 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314182 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314236 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314366 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314392 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314437 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.314918 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315088 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315249 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315546 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.315774 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.320250 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.323964 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.328378 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.328840 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330262 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330414 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.330917 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f3bc26d0-c80d-412d-9370-b821cdb7c2d7-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f3bc26d0-c80d-412d-9370-b821cdb7c2d7\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.405608 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 18 00:20:51 crc kubenswrapper[5121]: I0218 00:20:51.947011 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.336920 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"]
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.346487 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349214 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349561 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-gh7sh\""
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.349613 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.352291 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"]
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.435866 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.435918 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.447531 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"]
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.451578 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.454849 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-kzdgf\""
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.461758 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"]
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537619 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537735 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537798 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.537877 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.556245 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pqfn\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-kube-api-access-9pqfn\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.557928 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b1211244-3ab3-496b-9610-d2c6d4943528-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-9qb4h\" (UID: \"b1211244-3ab3-496b-9610-d2c6d4943528\") " pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h"
Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.634007 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0"
event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"d0bbdb692e272e5a32a127d8b6e7142c8f204f4cf6ce78888add4ccb0480e496"} Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.639114 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.639214 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.655399 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.656149 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t77gn\" (UniqueName: \"kubernetes.io/projected/244cd2fe-9d19-45ba-9d3c-2fa6d153f27c-kube-api-access-t77gn\") pod \"cert-manager-cainjector-8966b78d4-n4bv6\" (UID: \"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.662838 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.767058 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" Feb 18 00:20:52 crc kubenswrapper[5121]: I0218 00:20:52.859814 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-9qb4h"] Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.151525 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-n4bv6"] Feb 18 00:20:53 crc kubenswrapper[5121]: W0218 00:20:53.165794 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod244cd2fe_9d19_45ba_9d3c_2fa6d153f27c.slice/crio-47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400 WatchSource:0}: Error finding container 47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400: Status 404 returned error can't find the container with id 47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400 Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.645519 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" event={"ID":"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c","Type":"ContainerStarted","Data":"47328a32a330777cc3fbaab5169be605c259c293197ada409853687f0f348400"} Feb 18 00:20:53 crc kubenswrapper[5121]: I0218 00:20:53.646958 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" event={"ID":"b1211244-3ab3-496b-9610-d2c6d4943528","Type":"ContainerStarted","Data":"4600ae546a819a152cb51fb1aa0c33b6b59050a7d39bf6bd24a726ba24b7559f"} Feb 18 00:20:54 crc kubenswrapper[5121]: I0218 00:20:54.574580 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/perses-operator-669c9f96b5-6hzks" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.728295 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.730860 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" event={"ID":"b1211244-3ab3-496b-9610-d2c6d4943528","Type":"ContainerStarted","Data":"de6b554b3496d978ccc756a2b96804c70e6df38af477a1ae7dc2ecfe763ffa09"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.731377 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.733928 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" event={"ID":"244cd2fe-9d19-45ba-9d3c-2fa6d153f27c","Type":"ContainerStarted","Data":"8b6507ff3afc45212015775722a1804b5760ba9a343bc8c6051e1b09249e89d9"} Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.813703 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-n4bv6" podStartSLOduration=2.281660903 podStartE2EDuration="14.813644809s" podCreationTimestamp="2026-02-18 00:20:52 +0000 UTC" firstStartedPulling="2026-02-18 00:20:53.167546 +0000 UTC m=+736.682003735" lastFinishedPulling="2026-02-18 00:21:05.699529896 +0000 UTC m=+749.213987641" observedRunningTime="2026-02-18 00:21:06.805356133 +0000 UTC m=+750.319813908" watchObservedRunningTime="2026-02-18 00:21:06.813644809 +0000 UTC m=+750.328102604" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.845639 5121 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" podStartSLOduration=2.020910529 podStartE2EDuration="14.845611127s" podCreationTimestamp="2026-02-18 00:20:52 +0000 UTC" firstStartedPulling="2026-02-18 00:20:52.87487071 +0000 UTC m=+736.389328445" lastFinishedPulling="2026-02-18 00:21:05.699571268 +0000 UTC m=+749.214029043" observedRunningTime="2026-02-18 00:21:06.829995302 +0000 UTC m=+750.344453067" watchObservedRunningTime="2026-02-18 00:21:06.845611127 +0000 UTC m=+750.360068882" Feb 18 00:21:06 crc kubenswrapper[5121]: I0218 00:21:06.976319 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:21:07 crc kubenswrapper[5121]: I0218 00:21:07.010397 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 18 00:21:08 crc kubenswrapper[5121]: I0218 00:21:08.754602 5121 generic.go:358] "Generic (PLEG): container finished" podID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerID="0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da" exitCode=0 Feb 18 00:21:08 crc kubenswrapper[5121]: I0218 00:21:08.755106 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerDied","Data":"0dae15b2c696b01c1bdd0f403a7e16a2f038f646d4131a97668a3c7b655618da"} Feb 18 00:21:09 crc kubenswrapper[5121]: I0218 00:21:09.767500 5121 generic.go:358] "Generic (PLEG): container finished" podID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerID="902e221ffd38b6e922dc7e07fcd12be3e479632041e186f2ffa8d8c89de796a1" exitCode=0 Feb 18 00:21:09 crc kubenswrapper[5121]: I0218 00:21:09.767588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerDied","Data":"902e221ffd38b6e922dc7e07fcd12be3e479632041e186f2ffa8d8c89de796a1"} Feb 
18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.427892 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.804384 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.804603 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.807791 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-qrn44\"" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.982066 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:11 crc kubenswrapper[5121]: I0218 00:21:11.982226 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.083527 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 
00:21:12.084138 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.122061 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-bound-sa-token\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.122549 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s9wb\" (UniqueName: \"kubernetes.io/projected/8dac9f2e-68b4-409b-9fd2-bfc0bd928235-kube-api-access-5s9wb\") pod \"cert-manager-759f64656b-mkxwj\" (UID: \"8dac9f2e-68b4-409b-9fd2-bfc0bd928235\") " pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.134696 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-mkxwj" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.425123 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-mkxwj"] Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.748830 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-9qb4h" Feb 18 00:21:12 crc kubenswrapper[5121]: I0218 00:21:12.804978 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-mkxwj" event={"ID":"8dac9f2e-68b4-409b-9fd2-bfc0bd928235","Type":"ContainerStarted","Data":"906564509c0494de56e628619f53ea8b3abbe7934d74d754fd5bb9465ce9bd3b"} Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.837221 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f3bc26d0-c80d-412d-9370-b821cdb7c2d7","Type":"ContainerStarted","Data":"7b15826d2fbec8aace9fd8e362717eceab844473f76cb2855f885ba343c8a0d2"} Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.837550 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:21:15 crc kubenswrapper[5121]: I0218 00:21:15.891544 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=10.904233018 podStartE2EDuration="24.891520334s" podCreationTimestamp="2026-02-18 00:20:51 +0000 UTC" firstStartedPulling="2026-02-18 00:20:51.96514308 +0000 UTC m=+735.479600815" lastFinishedPulling="2026-02-18 00:21:05.952430376 +0000 UTC m=+749.466888131" observedRunningTime="2026-02-18 00:21:15.884918516 +0000 UTC m=+759.399376311" watchObservedRunningTime="2026-02-18 00:21:15.891520334 +0000 UTC m=+759.405978109" Feb 18 00:21:17 crc kubenswrapper[5121]: I0218 00:21:17.849423 5121 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="cert-manager/cert-manager-759f64656b-mkxwj" event={"ID":"8dac9f2e-68b4-409b-9fd2-bfc0bd928235","Type":"ContainerStarted","Data":"064094ec321c65659fbb51a4bfd879cd6736257adc085796713be1c1756c17fe"} Feb 18 00:21:18 crc kubenswrapper[5121]: I0218 00:21:18.891005 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-mkxwj" podStartSLOduration=7.890916894 podStartE2EDuration="7.890916894s" podCreationTimestamp="2026-02-18 00:21:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:21:18.877312606 +0000 UTC m=+762.391770431" watchObservedRunningTime="2026-02-18 00:21:18.890916894 +0000 UTC m=+762.405374669" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.443265 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.453384 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.453575 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.565845 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.565959 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.566007 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.667929 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668023 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" 
(UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668060 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668565 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.668587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.696680 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"redhat-operators-8gv79\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:23 crc kubenswrapper[5121]: I0218 00:21:23.797272 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.039025 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897726 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerID="37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0" exitCode=0 Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897802 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0"} Feb 18 00:21:24 crc kubenswrapper[5121]: I0218 00:21:24.897826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"22fc1022a88ca0ba4f6907e3d2b516afb76b997bbeee96263623281d6f180a6d"} Feb 18 00:21:25 crc kubenswrapper[5121]: I0218 00:21:25.914408 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc"} Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.925791 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerID="14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc" exitCode=0 Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.925843 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" 
event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc"} Feb 18 00:21:26 crc kubenswrapper[5121]: I0218 00:21:26.972723 5121 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f3bc26d0-c80d-412d-9370-b821cdb7c2d7" containerName="elasticsearch" probeResult="failure" output=< Feb 18 00:21:26 crc kubenswrapper[5121]: {"timestamp": "2026-02-18T00:21:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 18 00:21:26 crc kubenswrapper[5121]: > Feb 18 00:21:27 crc kubenswrapper[5121]: I0218 00:21:27.934617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerStarted","Data":"2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10"} Feb 18 00:21:27 crc kubenswrapper[5121]: I0218 00:21:27.957550 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8gv79" podStartSLOduration=4.236541001 podStartE2EDuration="4.957533335s" podCreationTimestamp="2026-02-18 00:21:23 +0000 UTC" firstStartedPulling="2026-02-18 00:21:24.898480084 +0000 UTC m=+768.412937819" lastFinishedPulling="2026-02-18 00:21:25.619472378 +0000 UTC m=+769.133930153" observedRunningTime="2026-02-18 00:21:27.95548727 +0000 UTC m=+771.469945005" watchObservedRunningTime="2026-02-18 00:21:27.957533335 +0000 UTC m=+771.471991080" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.122765 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.760818 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 
00:21:32.839233 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.839329 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.852231 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.904894 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.905012 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:32 crc kubenswrapper[5121]: I0218 00:21:32.905380 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: 
\"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007029 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007468 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.007898 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.008575 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0f05b854-8a2a-4d4e-84e4-194616da0cd1-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.033590 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-652fn\" (UniqueName: \"kubernetes.io/projected/0f05b854-8a2a-4d4e-84e4-194616da0cd1-kube-api-access-652fn\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0f05b854-8a2a-4d4e-84e4-194616da0cd1\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.157672 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.620163 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.797602 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.797667 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:33 crc kubenswrapper[5121]: I0218 00:21:33.982668 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerStarted","Data":"0a774bf0d1c1014df993d26326d2d2252f922bf686e6d3251333ff73eb48db32"} Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.544640 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.544819 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:21:34 crc kubenswrapper[5121]: I0218 00:21:34.851893 5121 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8gv79" 
podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" probeResult="failure" output=< Feb 18 00:21:34 crc kubenswrapper[5121]: timeout: failed to connect service ":50051" within 1s Feb 18 00:21:34 crc kubenswrapper[5121]: > Feb 18 00:21:41 crc kubenswrapper[5121]: I0218 00:21:41.067616 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f05b854-8a2a-4d4e-84e4-194616da0cd1" containerID="aa393960edf195860bbde9c83a5696dcca572211bb64767d53b43bf4fbe26e06" exitCode=0 Feb 18 00:21:41 crc kubenswrapper[5121]: I0218 00:21:41.067824 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerDied","Data":"aa393960edf195860bbde9c83a5696dcca572211bb64767d53b43bf4fbe26e06"} Feb 18 00:21:43 crc kubenswrapper[5121]: I0218 00:21:43.847116 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:43 crc kubenswrapper[5121]: I0218 00:21:43.894341 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.089829 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.094296 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0f05b854-8a2a-4d4e-84e4-194616da0cd1","Type":"ContainerStarted","Data":"04abe4d29395dc760207f73238b22eecf9a12cbe75255620b92ba4f333dde229"} Feb 18 00:21:44 crc kubenswrapper[5121]: I0218 00:21:44.122811 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" 
podStartSLOduration=2.105460764 podStartE2EDuration="12.122785959s" podCreationTimestamp="2026-02-18 00:21:32 +0000 UTC" firstStartedPulling="2026-02-18 00:21:33.640028394 +0000 UTC m=+777.154486169" lastFinishedPulling="2026-02-18 00:21:43.657353629 +0000 UTC m=+787.171811364" observedRunningTime="2026-02-18 00:21:44.113233581 +0000 UTC m=+787.627691386" watchObservedRunningTime="2026-02-18 00:21:44.122785959 +0000 UTC m=+787.637243724" Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.104945 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8gv79" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" containerID="cri-o://2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" gracePeriod=2 Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.943704 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.963781 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.963943 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:45 crc kubenswrapper[5121]: I0218 00:21:45.999189 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:45.999286 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:45.999325 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100144 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" 
Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100227 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100274 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100986 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.100984 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.112505 5121 generic.go:358] "Generic (PLEG): container finished" podID="19a6950a-ef4b-4630-8fb9-700371df4f58" 
containerID="2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" exitCode=0 Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.112596 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10"} Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.124256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.281032 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.308864 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403714 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403766 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.403817 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") pod \"19a6950a-ef4b-4630-8fb9-700371df4f58\" (UID: \"19a6950a-ef4b-4630-8fb9-700371df4f58\") " Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.406957 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities" (OuterVolumeSpecName: "utilities") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.414549 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p" (OuterVolumeSpecName: "kube-api-access-tb64p") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "kube-api-access-tb64p". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.505709 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tb64p\" (UniqueName: \"kubernetes.io/projected/19a6950a-ef4b-4630-8fb9-700371df4f58-kube-api-access-tb64p\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.505756 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.515122 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19a6950a-ef4b-4630-8fb9-700371df4f58" (UID: "19a6950a-ef4b-4630-8fb9-700371df4f58"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.607080 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19a6950a-ef4b-4630-8fb9-700371df4f58-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:46 crc kubenswrapper[5121]: I0218 00:21:46.723052 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm"] Feb 18 00:21:46 crc kubenswrapper[5121]: W0218 00:21:46.735107 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode665d44f_e92f_4675_8b1c_f1f169d2452c.slice/crio-2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440 WatchSource:0}: Error finding container 2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440: Status 404 returned error can't find the container with id 
2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440 Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.135599 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8gv79" event={"ID":"19a6950a-ef4b-4630-8fb9-700371df4f58","Type":"ContainerDied","Data":"22fc1022a88ca0ba4f6907e3d2b516afb76b997bbeee96263623281d6f180a6d"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.136176 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8gv79" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.138558 5121 scope.go:117] "RemoveContainer" containerID="2bcb27c97a0400d83a7458699f002dd465425af0f7e523f7c37a733c1e61da10" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154592 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="9e7ce08e59311aa1181158e5f1cfa0485216113c20d6904b842bff7e33df6f9c" exitCode=0 Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"9e7ce08e59311aa1181158e5f1cfa0485216113c20d6904b842bff7e33df6f9c"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.154790 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerStarted","Data":"2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440"} Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.198175 5121 scope.go:117] "RemoveContainer" containerID="14893e2466a0b6b33863431c6e2560c0c959d1d72aab0234bfed204e3e3924bc" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.202374 5121 kubelet.go:2553] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.209146 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8gv79"] Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.224572 5121 scope.go:117] "RemoveContainer" containerID="37899606b7230e219ba1ede5dfa2904ee71acfadba88a2a7524ab839ad7954b0" Feb 18 00:21:47 crc kubenswrapper[5121]: I0218 00:21:47.285168 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" path="/var/lib/kubelet/pods/19a6950a-ef4b-4630-8fb9-700371df4f58/volumes" Feb 18 00:21:48 crc kubenswrapper[5121]: I0218 00:21:48.165842 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="72b07f22ea402db3bf59859afad0de3439909b8057b57cf2092c4b0d44df4a84" exitCode=0 Feb 18 00:21:48 crc kubenswrapper[5121]: I0218 00:21:48.165925 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"72b07f22ea402db3bf59859afad0de3439909b8057b57cf2092c4b0d44df4a84"} Feb 18 00:21:49 crc kubenswrapper[5121]: I0218 00:21:49.181848 5121 generic.go:358] "Generic (PLEG): container finished" podID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerID="6780c13276ae389f3e5964804021a90c86a92add5ae878279209ae2ac73bdb65" exitCode=0 Feb 18 00:21:49 crc kubenswrapper[5121]: I0218 00:21:49.182058 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"6780c13276ae389f3e5964804021a90c86a92add5ae878279209ae2ac73bdb65"} Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.552244 5121 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671152 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671239 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.671395 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") pod \"e665d44f-e92f-4675-8b1c-f1f169d2452c\" (UID: \"e665d44f-e92f-4675-8b1c-f1f169d2452c\") " Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.672854 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle" (OuterVolumeSpecName: "bundle") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.679357 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc" (OuterVolumeSpecName: "kube-api-access-hfssc") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). 
InnerVolumeSpecName "kube-api-access-hfssc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.689166 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util" (OuterVolumeSpecName: "util") pod "e665d44f-e92f-4675-8b1c-f1f169d2452c" (UID: "e665d44f-e92f-4675-8b1c-f1f169d2452c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772724 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-bundle\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772780 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfssc\" (UniqueName: \"kubernetes.io/projected/e665d44f-e92f-4675-8b1c-f1f169d2452c-kube-api-access-hfssc\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:50 crc kubenswrapper[5121]: I0218 00:21:50.772798 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e665d44f-e92f-4675-8b1c-f1f169d2452c-util\") on node \"crc\" DevicePath \"\"" Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224316 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" event={"ID":"e665d44f-e92f-4675-8b1c-f1f169d2452c","Type":"ContainerDied","Data":"2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440"} Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224408 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f1ff3103d3b5c646769b5109a59f87837ae1f2f00976b363ba73d51b4ab6440" Feb 18 00:21:51 crc kubenswrapper[5121]: I0218 00:21:51.224520 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661cqmsm" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.794581 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795779 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795794 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795809 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="util" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795816 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="util" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795825 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-utilities" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795833 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-utilities" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795846 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-content" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795853 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="extract-content" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795863 5121 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795870 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795888 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="pull" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.795895 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="pull" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.796007 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e665d44f-e92f-4675-8b1c-f1f169d2452c" containerName="extract" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.796019 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="19a6950a-ef4b-4630-8fb9-700371df4f58" containerName="registry-server" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953104 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953144 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.953314 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.955681 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-966zk\"" Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.958451 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:57 crc kubenswrapper[5121]: I0218 00:21:57.958560 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017324 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017672 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017728 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: 
I0218 00:21:58.017758 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.017785 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119120 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119377 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119470 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" 
Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119583 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119688 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119861 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.119965 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/a9bb59e6-a92e-442e-87e6-b7331ba07de6-runner\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.120059 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc 
kubenswrapper[5121]: I0218 00:21:58.137584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"certified-operators-sq64t\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.137597 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5mkx\" (UniqueName: \"kubernetes.io/projected/a9bb59e6-a92e-442e-87e6-b7331ba07de6-kube-api-access-s5mkx\") pod \"smart-gateway-operator-97b85656c-zh9kd\" (UID: \"a9bb59e6-a92e-442e-87e6-b7331ba07de6\") " pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.323193 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.330999 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.754809 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-zh9kd"] Feb 18 00:21:58 crc kubenswrapper[5121]: I0218 00:21:58.816832 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.296795 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" event={"ID":"a9bb59e6-a92e-442e-87e6-b7331ba07de6","Type":"ContainerStarted","Data":"18627561ca4dc622fb61ff483608e16e766edcd5c29d43faddcc6c4366b100c8"} Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299590 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f" exitCode=0 Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299631 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f"} Feb 18 00:21:59 crc kubenswrapper[5121]: I0218 00:21:59.299690 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"cf03be90ca07127e64a0ee0a501bbfaad649d2c09d8407ea422896de154f3853"} Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.138075 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.144338 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.145910 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.183624 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.183846 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.184048 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.245420 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.312338 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3"} Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.346571 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " 
pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.376514 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"auto-csr-approver-29522902-4gc7s\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.537523 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:00 crc kubenswrapper[5121]: I0218 00:22:00.758017 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:22:00 crc kubenswrapper[5121]: W0218 00:22:00.760015 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode811a594_9ca7_4167_807e_e39bd75b7912.slice/crio-47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2 WatchSource:0}: Error finding container 47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2: Status 404 returned error can't find the container with id 47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2 Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 00:22:01.321686 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3" exitCode=0 Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 00:22:01.322140 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3"} Feb 18 00:22:01 crc kubenswrapper[5121]: I0218 
00:22:01.325361 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerStarted","Data":"47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2"} Feb 18 00:22:02 crc kubenswrapper[5121]: I0218 00:22:02.334944 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerStarted","Data":"ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6"} Feb 18 00:22:02 crc kubenswrapper[5121]: I0218 00:22:02.357761 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sq64t" podStartSLOduration=4.565105511 podStartE2EDuration="5.357743908s" podCreationTimestamp="2026-02-18 00:21:57 +0000 UTC" firstStartedPulling="2026-02-18 00:21:59.30039681 +0000 UTC m=+802.814854545" lastFinishedPulling="2026-02-18 00:22:00.093035207 +0000 UTC m=+803.607492942" observedRunningTime="2026-02-18 00:22:02.35303012 +0000 UTC m=+805.867487855" watchObservedRunningTime="2026-02-18 00:22:02.357743908 +0000 UTC m=+805.872201643" Feb 18 00:22:04 crc kubenswrapper[5121]: I0218 00:22:04.544942 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:22:04 crc kubenswrapper[5121]: I0218 00:22:04.545335 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:22:08 crc 
kubenswrapper[5121]: I0218 00:22:08.331791 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.332179 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.384575 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:08 crc kubenswrapper[5121]: I0218 00:22:08.445128 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:10 crc kubenswrapper[5121]: I0218 00:22:10.679858 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:10 crc kubenswrapper[5121]: I0218 00:22:10.680374 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sq64t" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" containerID="cri-o://ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" gracePeriod=2 Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.419000 5121 generic.go:358] "Generic (PLEG): container finished" podID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerID="ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" exitCode=0 Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.419176 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6"} Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.845052 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.930987 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.931059 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.931106 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") pod \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\" (UID: \"a0ab6087-f6f1-4788-bb13-52cf544d71ae\") " Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.938892 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75" (OuterVolumeSpecName: "kube-api-access-rpx75") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "kube-api-access-rpx75". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.940062 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities" (OuterVolumeSpecName: "utilities") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:11 crc kubenswrapper[5121]: I0218 00:22:11.961919 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0ab6087-f6f1-4788-bb13-52cf544d71ae" (UID: "a0ab6087-f6f1-4788-bb13-52cf544d71ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032549 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rpx75\" (UniqueName: \"kubernetes.io/projected/a0ab6087-f6f1-4788-bb13-52cf544d71ae-kube-api-access-rpx75\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032584 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-utilities\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.032625 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0ab6087-f6f1-4788-bb13-52cf544d71ae-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.428960 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sq64t" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.428969 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sq64t" event={"ID":"a0ab6087-f6f1-4788-bb13-52cf544d71ae","Type":"ContainerDied","Data":"cf03be90ca07127e64a0ee0a501bbfaad649d2c09d8407ea422896de154f3853"} Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.429080 5121 scope.go:117] "RemoveContainer" containerID="ec846805bbd31bc35d225e5e5d17e3b432a2b85b64e8200d22557f1314b23da6" Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.480322 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:12 crc kubenswrapper[5121]: I0218 00:22:12.480574 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sq64t"] Feb 18 00:22:13 crc kubenswrapper[5121]: I0218 00:22:13.282808 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" path="/var/lib/kubelet/pods/a0ab6087-f6f1-4788-bb13-52cf544d71ae/volumes" Feb 18 00:22:15 crc kubenswrapper[5121]: I0218 00:22:15.471433 5121 scope.go:117] "RemoveContainer" containerID="5391b6858a94d6cbb8e0135bf8b5f286822017476bf2a7f03c9d9116163d9ca3" Feb 18 00:22:15 crc kubenswrapper[5121]: I0218 00:22:15.784761 5121 scope.go:117] "RemoveContainer" containerID="c994ccb69aff0fe06b89699777211626107a7c2ca19cff547eaf6f6272978b7f" Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.459756 5121 generic.go:358] "Generic (PLEG): container finished" podID="e811a594-9ca7-4167-807e-e39bd75b7912" containerID="b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb" exitCode=0 Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.459883 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" 
event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerDied","Data":"b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb"} Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.461238 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" event={"ID":"a9bb59e6-a92e-442e-87e6-b7331ba07de6","Type":"ContainerStarted","Data":"b0138f1a09b39de8325c2baba5c44a3d5d29573228c24cabd31258a2b7309d15"} Feb 18 00:22:16 crc kubenswrapper[5121]: I0218 00:22:16.483856 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-zh9kd" podStartSLOduration=2.179738211 podStartE2EDuration="19.483840867s" podCreationTimestamp="2026-02-18 00:21:57 +0000 UTC" firstStartedPulling="2026-02-18 00:21:58.763923014 +0000 UTC m=+802.278380749" lastFinishedPulling="2026-02-18 00:22:16.06802566 +0000 UTC m=+819.582483405" observedRunningTime="2026-02-18 00:22:16.483812006 +0000 UTC m=+819.998269751" watchObservedRunningTime="2026-02-18 00:22:16.483840867 +0000 UTC m=+819.998298602" Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.800002 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.913428 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") pod \"e811a594-9ca7-4167-807e-e39bd75b7912\" (UID: \"e811a594-9ca7-4167-807e-e39bd75b7912\") " Feb 18 00:22:17 crc kubenswrapper[5121]: I0218 00:22:17.921314 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h" (OuterVolumeSpecName: "kube-api-access-q4k7h") pod "e811a594-9ca7-4167-807e-e39bd75b7912" (UID: "e811a594-9ca7-4167-807e-e39bd75b7912"). InnerVolumeSpecName "kube-api-access-q4k7h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.015012 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4k7h\" (UniqueName: \"kubernetes.io/projected/e811a594-9ca7-4167-807e-e39bd75b7912-kube-api-access-q4k7h\") on node \"crc\" DevicePath \"\"" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480247 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" event={"ID":"e811a594-9ca7-4167-807e-e39bd75b7912","Type":"ContainerDied","Data":"47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2"} Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480295 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47038e90c2034e4011a4b42b4b6916b44bea206477c18360129a174976068aa2" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.480261 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522902-4gc7s" Feb 18 00:22:18 crc kubenswrapper[5121]: E0218 00:22:18.588808 5121 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode811a594_9ca7_4167_807e_e39bd75b7912.slice\": RecentStats: unable to find data in memory cache]" Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.857215 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:22:18 crc kubenswrapper[5121]: I0218 00:22:18.867926 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522896-wgmcl"] Feb 18 00:22:19 crc kubenswrapper[5121]: I0218 00:22:19.280859 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bd0236-52ea-4369-9891-8cf9e1dcff2b" path="/var/lib/kubelet/pods/17bd0236-52ea-4369-9891-8cf9e1dcff2b/volumes" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.544983 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.545473 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.545545 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:22:34 crc 
kubenswrapper[5121]: I0218 00:22:34.546555 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:22:34 crc kubenswrapper[5121]: I0218 00:22:34.546746 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2" gracePeriod=600 Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.635693 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2" exitCode=0 Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.635760 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2"} Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.636734 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"} Feb 18 00:22:35 crc kubenswrapper[5121]: I0218 00:22:35.636798 5121 scope.go:117] "RemoveContainer" containerID="080bd236d43345c652c365ed8853a29e7dd709d19ef36c1726a3dcdaac7b9c44" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.064236 5121 kubelet.go:2537] 
"SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066152 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066203 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066286 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-utilities" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066305 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-utilities" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066348 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-content" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066365 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="extract-content" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066388 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066403 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.066688 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" containerName="oc" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.067979 5121 
memory_manager.go:356] "RemoveStaleState removing state" podUID="a0ab6087-f6f1-4788-bb13-52cf544d71ae" containerName="registry-server" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.100679 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.100911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.103252 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227339 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227417 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.227646 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329805 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.329979 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: 
\"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.331524 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.335488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.370377 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5857\" (UniqueName: \"kubernetes.io/projected/d9c883f8-94d3-4038-89dd-b6b0bf1e618a-kube-api-access-k5857\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"d9c883f8-94d3-4038-89dd-b6b0bf1e618a\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.423113 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"
Feb 18 00:22:36 crc kubenswrapper[5121]: I0218 00:22:36.660172 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"]
Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.656977 5121 generic.go:358] "Generic (PLEG): container finished" podID="d9c883f8-94d3-4038-89dd-b6b0bf1e618a" containerID="2a43990773b68cdfac86367ff1fbc549dc09b16508ec0e914064e112cb7c1e87" exitCode=0
Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.657053 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerDied","Data":"2a43990773b68cdfac86367ff1fbc549dc09b16508ec0e914064e112cb7c1e87"}
Feb 18 00:22:37 crc kubenswrapper[5121]: I0218 00:22:37.657497 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerStarted","Data":"15866c64ac273331b31cb39b7ec7a65ea88c1f333734beac240806d0e1d8067b"}
Feb 18 00:22:39 crc kubenswrapper[5121]: I0218 00:22:39.675470 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"d9c883f8-94d3-4038-89dd-b6b0bf1e618a","Type":"ContainerStarted","Data":"3d8f91df2725f1a4a05396bd2101aed458f2a0ac51b032506c3182a1c1b9c823"}
Feb 18 00:22:39 crc kubenswrapper[5121]: I0218 00:22:39.699066 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.942598706 podStartE2EDuration="3.699041842s" podCreationTimestamp="2026-02-18 00:22:36 +0000 UTC" firstStartedPulling="2026-02-18 00:22:37.658297138 +0000 UTC m=+841.172754903" lastFinishedPulling="2026-02-18 00:22:38.414740264 +0000 UTC m=+841.929198039" observedRunningTime="2026-02-18 00:22:39.697537021 +0000 UTC m=+843.211994826" watchObservedRunningTime="2026-02-18 00:22:39.699041842 +0000 UTC m=+843.213499617"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.296417 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"]
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.306300 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313544 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313745 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.313933 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.322983 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"]
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.415911 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.416093 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.416175 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.417004 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.417388 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.450259 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.627950 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.893447 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"]
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.899685 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.902606 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.907353 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"]
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925391 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925450 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:41 crc kubenswrapper[5121]: I0218 00:22:41.925512 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.027128 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.027203 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.027249 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.029113 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.029412 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.049615 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.108185 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"]
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.220697 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.472638 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"]
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.490297 5121 scope.go:117] "RemoveContainer" containerID="07a6717201c9b26b738c890c1d084e1f83f398a3b5f2e06bcfd054431aa66df7"
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.697580 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerStarted","Data":"5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283"}
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.697926 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerStarted","Data":"d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994"}
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701191 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"22de209a085b64f1dc864f6132975da72e6815afb420eb63a158d8e0a94a63be"}
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701080 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="22de209a085b64f1dc864f6132975da72e6815afb420eb63a158d8e0a94a63be" exitCode=0
Feb 18 00:22:42 crc kubenswrapper[5121]: I0218 00:22:42.701404 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerStarted","Data":"013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce"}
Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.720793 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283" exitCode=0
Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.721340 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"5f3dd63714433cbd47a83e6d85ba5473eb2c56d28db2c2806a35d5fd1f1d5283"}
Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.728338 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="4420ce8e6a7792586bacf59ad0d263c408b38b382c0cbdf405971da2f09df69c" exitCode=0
Feb 18 00:22:43 crc kubenswrapper[5121]: I0218 00:22:43.728455 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"4420ce8e6a7792586bacf59ad0d263c408b38b382c0cbdf405971da2f09df69c"}
Feb 18 00:22:44 crc kubenswrapper[5121]: I0218 00:22:44.740075 5121 generic.go:358] "Generic (PLEG): container finished" podID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerID="2e10cb1dbf24d828471fa68ad727412918571d8fb49f9411586af58d4e259b57" exitCode=0
Feb 18 00:22:44 crc kubenswrapper[5121]: I0218 00:22:44.740120 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"2e10cb1dbf24d828471fa68ad727412918571d8fb49f9411586af58d4e259b57"}
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.087787 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225176 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") "
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225240 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") "
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225332 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") pod \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\" (UID: \"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2\") "
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.225936 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle" (OuterVolumeSpecName: "bundle") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.238668 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util" (OuterVolumeSpecName: "util") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.239372 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx" (OuterVolumeSpecName: "kube-api-access-tmvwx") pod "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" (UID: "c2ab26d1-726f-4bc4-85ee-a3cf24d701a2"). InnerVolumeSpecName "kube-api-access-tmvwx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326384 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tmvwx\" (UniqueName: \"kubernetes.io/projected/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-kube-api-access-tmvwx\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326827 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-util\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.326840 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c2ab26d1-726f-4bc4-85ee-a3cf24d701a2-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.758640 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="cb5dc786e0f0b26b514b716554567d503b5bed0c885811d53daec6278b28f7b6" exitCode=0
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.758876 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"cb5dc786e0f0b26b514b716554567d503b5bed0c885811d53daec6278b28f7b6"}
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768351 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h" event={"ID":"c2ab26d1-726f-4bc4-85ee-a3cf24d701a2","Type":"ContainerDied","Data":"013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce"}
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768400 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="013513444f046ec1ac230b4757401f5f665f66b53937282752ae185cd05000ce"
Feb 18 00:22:46 crc kubenswrapper[5121]: I0218 00:22:46.768432 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572tm88h"
Feb 18 00:22:47 crc kubenswrapper[5121]: I0218 00:22:47.783053 5121 generic.go:358] "Generic (PLEG): container finished" podID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerID="56e17c7afb3cf25c3d808d2266252b6800c995d5ec17b98403cebc71b4c5f642" exitCode=0
Feb 18 00:22:47 crc kubenswrapper[5121]: I0218 00:22:47.783202 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"56e17c7afb3cf25c3d808d2266252b6800c995d5ec17b98403cebc71b4c5f642"}
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.147079 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.286073 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") "
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.286859 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") "
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.287080 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") pod \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\" (UID: \"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad\") "
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.288688 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle" (OuterVolumeSpecName: "bundle") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.295477 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h" (OuterVolumeSpecName: "kube-api-access-m6l8h") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "kube-api-access-m6l8h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.305963 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util" (OuterVolumeSpecName: "util") pod "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" (UID: "e7ed8c65-bc15-4ac0-91be-fd93809fe9ad"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389527 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m6l8h\" (UniqueName: \"kubernetes.io/projected/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-kube-api-access-m6l8h\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389558 5121 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-util\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.389567 5121 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e7ed8c65-bc15-4ac0-91be-fd93809fe9ad-bundle\") on node \"crc\" DevicePath \"\""
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804095 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n" event={"ID":"e7ed8c65-bc15-4ac0-91be-fd93809fe9ad","Type":"ContainerDied","Data":"d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994"}
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804154 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d682985a0241027f32fb57188153d2a14e5887298f5ba72e5ac0149307b31994"
Feb 18 00:22:49 crc kubenswrapper[5121]: I0218 00:22:49.804339 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.988555 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"]
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989610 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989628 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989674 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="util"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989684 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="util"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989700 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="util"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989707 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="util"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989732 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="pull"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989739 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="pull"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989764 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989772 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989785 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="pull"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989792 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="pull"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989904 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="c2ab26d1-726f-4bc4-85ee-a3cf24d701a2" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.989921 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e7ed8c65-bc15-4ac0-91be-fd93809fe9ad" containerName="extract"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.995320 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s"
Feb 18 00:22:56 crc kubenswrapper[5121]: I0218 00:22:56.999195 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-2wqmc\""
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.009111 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"]
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.103970 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s"
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.205035 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s"
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.230457 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk8c\" (UniqueName: \"kubernetes.io/projected/7fe2ffb0-1690-49b9-a86e-88e147ec4ca6-kube-api-access-llk8c\") pod \"interconnect-operator-78b9bd8798-mn48s\" (UID: \"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6\") " pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s"
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.313430 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s"
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.554532 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-mn48s"]
Feb 18 00:22:57 crc kubenswrapper[5121]: W0218 00:22:57.556391 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fe2ffb0_1690_49b9_a86e_88e147ec4ca6.slice/crio-d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce WatchSource:0}: Error finding container d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce: Status 404 returned error can't find the container with id d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce
Feb 18 00:22:57 crc kubenswrapper[5121]: I0218 00:22:57.881846 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" event={"ID":"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6","Type":"ContainerStarted","Data":"d72b0260b8367ad61a4a340abf7aabae77286720e05ce3b8a3d60e60b3a1cfce"}
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.228059 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"]
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.238833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"]
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.238954 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.241966 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-fsn2j\""
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.336045 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.336801 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.437800 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.437892 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.438614 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/24352f2e-20c2-4d2e-bd18-8fb703441b7b-runner\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.471752 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbnx\" (UniqueName: \"kubernetes.io/projected/24352f2e-20c2-4d2e-bd18-8fb703441b7b-kube-api-access-mlbnx\") pod \"service-telemetry-operator-794b5697c7-gnq9d\" (UID: \"24352f2e-20c2-4d2e-bd18-8fb703441b7b\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.558312 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.852296 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-gnq9d"]
Feb 18 00:22:59 crc kubenswrapper[5121]: I0218 00:22:59.898101 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" event={"ID":"24352f2e-20c2-4d2e-bd18-8fb703441b7b","Type":"ContainerStarted","Data":"4fb4b78617caec80359836c04c6a8b217475eb66c126b9ce646a33a5fb209f9a"}
Feb 18 00:23:10 crc kubenswrapper[5121]: I0218 00:23:10.987771 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" event={"ID":"7fe2ffb0-1690-49b9-a86e-88e147ec4ca6","Type":"ContainerStarted","Data":"21e004903732d0a96b12b11f2fa9555552057c187a0fd687b9659f17ced53748"}
Feb 18 00:23:10 crc kubenswrapper[5121]: I0218 00:23:10.989450 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" event={"ID":"24352f2e-20c2-4d2e-bd18-8fb703441b7b","Type":"ContainerStarted","Data":"1caf7773b361d7cc0f3bd51335f7e76489fc1bd67336b9116be0bca433cee03f"}
Feb 18 00:23:11 crc kubenswrapper[5121]: I0218 00:23:11.010732 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-mn48s" podStartSLOduration=1.8978910629999999 podStartE2EDuration="15.010710797s" podCreationTimestamp="2026-02-18 00:22:56 +0000 UTC" firstStartedPulling="2026-02-18 00:22:57.557793568 +0000 UTC m=+861.072251323" lastFinishedPulling="2026-02-18 00:23:10.670613322 +0000 UTC m=+874.185071057" observedRunningTime="2026-02-18 00:23:11.008144848 +0000 UTC m=+874.522602603" watchObservedRunningTime="2026-02-18 00:23:11.010710797 +0000 UTC m=+874.525168532"
Feb 18 00:23:11 crc kubenswrapper[5121]: I0218 00:23:11.027981 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-gnq9d" podStartSLOduration=1.143850666 podStartE2EDuration="12.027959474s" podCreationTimestamp="2026-02-18 00:22:59 +0000 UTC" firstStartedPulling="2026-02-18 00:22:59.875273529 +0000 UTC m=+863.389731264" lastFinishedPulling="2026-02-18 00:23:10.759382337 +0000 UTC m=+874.273840072" observedRunningTime="2026-02-18 00:23:11.026138895 +0000 UTC m=+874.540596630" watchObservedRunningTime="2026-02-18 00:23:11.027959474 +0000 UTC m=+874.542417209"
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.707513 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"]
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.718481 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.722286 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\""
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.722914 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\""
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.723028 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\""
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.723093 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\""
Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.724005 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-mdl7b\""
Feb 18 00:23:31
crc kubenswrapper[5121]: I0218 00:23:31.724281 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.729342 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.737457 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823207 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823626 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823782 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.823901 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.824032 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.824247 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925229 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925490 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925572 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925691 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925787 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925878 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.925980 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.926175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.932957 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.933133 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: 
\"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.950305 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.950376 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.951227 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:31 crc kubenswrapper[5121]: I0218 00:23:31.955560 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"default-interconnect-55bf8d5cb-bh9xk\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:32 crc kubenswrapper[5121]: I0218 00:23:32.049549 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" Feb 18 00:23:32 crc kubenswrapper[5121]: I0218 00:23:32.509017 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:23:32 crc kubenswrapper[5121]: W0218 00:23:32.517161 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1a43f5a_93d6_4bf5_9595_4b068338fb4b.slice/crio-9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb WatchSource:0}: Error finding container 9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb: Status 404 returned error can't find the container with id 9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb Feb 18 00:23:33 crc kubenswrapper[5121]: I0218 00:23:33.172214 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerStarted","Data":"9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb"} Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.656395 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.664599 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.675472 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 
00:23:37 crc kubenswrapper[5121]: I0218 00:23:37.681558 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:23:38 crc kubenswrapper[5121]: I0218 00:23:38.229397 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerStarted","Data":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"} Feb 18 00:23:38 crc kubenswrapper[5121]: I0218 00:23:38.267369 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" podStartSLOduration=2.270373654 podStartE2EDuration="7.267340145s" podCreationTimestamp="2026-02-18 00:23:31 +0000 UTC" firstStartedPulling="2026-02-18 00:23:32.519173661 +0000 UTC m=+896.033631406" lastFinishedPulling="2026-02-18 00:23:37.516140122 +0000 UTC m=+901.030597897" observedRunningTime="2026-02-18 00:23:38.255024612 +0000 UTC m=+901.769482347" watchObservedRunningTime="2026-02-18 00:23:38.267340145 +0000 UTC m=+901.781797920" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.065535 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.090008 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.090225 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096227 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096350 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-kmb9h\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096459 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096522 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096235 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096474 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.096976 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.097284 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.097791 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.098137 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.202948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203005 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203064 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203227 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203341 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203382 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203488 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203549 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203597 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203625 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203692 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.203854 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305030 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305104 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " 
pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305142 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305174 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305327 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.305366 5121 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305382 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.305531 5121 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls podName:7acc81c6-6ef1-4c1d-ac51-c020076734e6 nodeName:}" failed. No retries permitted until 2026-02-18 00:23:42.805488587 +0000 UTC m=+906.319946352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7acc81c6-6ef1-4c1d-ac51-c020076734e6") : secret "default-prometheus-proxy-tls" not found Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305705 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305815 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305906 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.305998 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306067 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306100 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306175 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.306825 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.307618 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.307715 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7acc81c6-6ef1-4c1d-ac51-c020076734e6-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.313827 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.314688 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7acc81c6-6ef1-4c1d-ac51-c020076734e6-config-out\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.315049 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.315107 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f5aa9db2d83b79f6d98384877ca4fd57474ec19f48c0eca6a401cef73b2c9bec/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.319251 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-web-config\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.328323 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-tls-assets\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.329394 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.352850 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-5322e812-0ecf-46b8-957d-d372b649cf87\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5322e812-0ecf-46b8-957d-d372b649cf87\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.355005 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26m6w\" (UniqueName: \"kubernetes.io/projected/7acc81c6-6ef1-4c1d-ac51-c020076734e6-kube-api-access-26m6w\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: I0218 00:23:42.815213 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.815371 5121 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 18 00:23:42 crc kubenswrapper[5121]: E0218 00:23:42.815450 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls podName:7acc81c6-6ef1-4c1d-ac51-c020076734e6 nodeName:}" failed. No retries permitted until 2026-02-18 00:23:43.815430073 +0000 UTC m=+907.329887808 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7acc81c6-6ef1-4c1d-ac51-c020076734e6") : secret "default-prometheus-proxy-tls" not found Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.830365 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.835285 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7acc81c6-6ef1-4c1d-ac51-c020076734e6-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7acc81c6-6ef1-4c1d-ac51-c020076734e6\") " pod="service-telemetry/prometheus-default-0" Feb 18 00:23:43 crc kubenswrapper[5121]: I0218 00:23:43.913881 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 18 00:23:44 crc kubenswrapper[5121]: I0218 00:23:44.155439 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 18 00:23:44 crc kubenswrapper[5121]: W0218 00:23:44.160773 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7acc81c6_6ef1_4c1d_ac51_c020076734e6.slice/crio-927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee WatchSource:0}: Error finding container 927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee: Status 404 returned error can't find the container with id 927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee Feb 18 00:23:44 crc kubenswrapper[5121]: I0218 00:23:44.271562 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"927d36dfeac85100e10ef4ddc6037cc13c7b0a90ca53151cf87150dbfe9cb4ee"} Feb 18 00:23:50 crc kubenswrapper[5121]: I0218 00:23:50.323617 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56"} Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.924491 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"] Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.949272 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"] Feb 18 00:23:51 crc kubenswrapper[5121]: I0218 00:23:51.949394 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.067887 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.170122 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.189303 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq2vq\" (UniqueName: \"kubernetes.io/projected/37bc1d59-8b60-48c3-aabd-f9337333ef2b-kube-api-access-gq2vq\") pod \"default-snmp-webhook-6774d8dfbc-7plz2\" (UID: \"37bc1d59-8b60-48c3-aabd-f9337333ef2b\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.271214 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" Feb 18 00:23:52 crc kubenswrapper[5121]: I0218 00:23:52.776258 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2"] Feb 18 00:23:53 crc kubenswrapper[5121]: I0218 00:23:53.346470 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" event={"ID":"37bc1d59-8b60-48c3-aabd-f9337333ef2b","Type":"ContainerStarted","Data":"8619e4999ff9f2ab0ee03dc36aeb42442c2880e72a87b2b7f106b2391f128baa"} Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.787197 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.905175 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.905333 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.908833 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.909599 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.910311 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.910622 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.911133 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-6vp2p\"" Feb 18 00:23:55 crc kubenswrapper[5121]: I0218 00:23:55.912679 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033099 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033175 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod \"alertmanager-default-0\" (UID: 
\"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033228 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033273 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033379 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033438 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033479 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9hpz\" (UniqueName: 
\"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033570 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.033694 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134900 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134943 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.134967 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-j9hpz\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135008 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135132 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.135184 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.135406 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.135491 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. 
No retries permitted until 2026-02-18 00:23:56.635463685 +0000 UTC m=+920.149921430 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136103 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136145 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.136164 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.139849 5121 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.140254 5121 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/66ff35c8059af9e4c4e52464365163bf5bb0a4b624024d7bea302fe3d0e72496/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.141140 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.141200 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-volume\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.145012 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-web-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.148276 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-tls-assets\") pod 
\"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.148673 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-config-out\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.155751 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9hpz\" (UniqueName: \"kubernetes.io/projected/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-kube-api-access-j9hpz\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.155853 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.178451 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cd38d743-86e9-4f59-9032-0a6d45a4cb86\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: I0218 00:23:56.642079 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.642336 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:56 crc kubenswrapper[5121]: E0218 00:23:56.642456 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. No retries permitted until 2026-02-18 00:23:57.642431169 +0000 UTC m=+921.156888904 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.378390 5121 generic.go:358] "Generic (PLEG): container finished" podID="7acc81c6-6ef1-4c1d-ac51-c020076734e6" containerID="6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56" exitCode=0 Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.378480 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerDied","Data":"6abd7e75779b55839559a7d2f7cf46bc4e336d6a45f00c07d7db5aff739cfc56"} Feb 18 00:23:57 crc kubenswrapper[5121]: I0218 00:23:57.655234 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " 
pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:57 crc kubenswrapper[5121]: E0218 00:23:57.655493 5121 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:57 crc kubenswrapper[5121]: E0218 00:23:57.655571 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls podName:36845eb3-f7ec-4a0f-81ca-6650cc34a86d nodeName:}" failed. No retries permitted until 2026-02-18 00:23:59.655548602 +0000 UTC m=+923.170006367 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "36845eb3-f7ec-4a0f-81ca-6650cc34a86d") : secret "default-alertmanager-proxy-tls" not found Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.687216 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.695216 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/36845eb3-f7ec-4a0f-81ca-6650cc34a86d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"36845eb3-f7ec-4a0f-81ca-6650cc34a86d\") " pod="service-telemetry/alertmanager-default-0" Feb 18 00:23:59 crc kubenswrapper[5121]: I0218 00:23:59.828042 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.136610 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.317223 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.317331 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320402 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320806 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.320851 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.395628 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.497086 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " 
pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.523964 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"auto-csr-approver-29522904-frzvq\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:00 crc kubenswrapper[5121]: I0218 00:24:00.634377 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:01 crc kubenswrapper[5121]: I0218 00:24:01.289233 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"] Feb 18 00:24:01 crc kubenswrapper[5121]: W0218 00:24:01.431306 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb912abb_9dfb_4035_9eea_266ad0057af0.slice/crio-80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73 WatchSource:0}: Error finding container 80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73: Status 404 returned error can't find the container with id 80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73 Feb 18 00:24:01 crc kubenswrapper[5121]: W0218 00:24:01.473572 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36845eb3_f7ec_4a0f_81ca_6650cc34a86d.slice/crio-2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee WatchSource:0}: Error finding container 2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee: Status 404 returned error can't find the container with id 2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee Feb 18 00:24:01 crc kubenswrapper[5121]: I0218 00:24:01.475331 5121 kubelet.go:2544] 
"SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.411427 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerStarted","Data":"80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.413600 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" event={"ID":"37bc1d59-8b60-48c3-aabd-f9337333ef2b","Type":"ContainerStarted","Data":"019479f7141b25a04ee0c3699d1e1a9765a5d52e9739da147c7c5f952a54a895"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.417633 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"2720544bb68f2d1a43f83dce7191688bcf7d6d8577ad3c363c7fa3bb301bbbee"} Feb 18 00:24:02 crc kubenswrapper[5121]: I0218 00:24:02.426436 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7plz2" podStartSLOduration=2.67786527 podStartE2EDuration="11.426414803s" podCreationTimestamp="2026-02-18 00:23:51 +0000 UTC" firstStartedPulling="2026-02-18 00:23:52.781344133 +0000 UTC m=+916.295801878" lastFinishedPulling="2026-02-18 00:24:01.529893676 +0000 UTC m=+925.044351411" observedRunningTime="2026-02-18 00:24:02.424856451 +0000 UTC m=+925.939314196" watchObservedRunningTime="2026-02-18 00:24:02.426414803 +0000 UTC m=+925.940872548" Feb 18 00:24:03 crc kubenswrapper[5121]: I0218 00:24:03.427756 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.447037 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerStarted","Data":"6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.449333 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"bd46fd6abe11b67ea91c607f7d6a4a27bbf2ef814f2c34b3732767d685580aae"} Feb 18 00:24:06 crc kubenswrapper[5121]: I0218 00:24:06.459581 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29522904-frzvq" podStartSLOduration=2.540293206 podStartE2EDuration="6.459562396s" podCreationTimestamp="2026-02-18 00:24:00 +0000 UTC" firstStartedPulling="2026-02-18 00:24:01.432705865 +0000 UTC m=+924.947163600" lastFinishedPulling="2026-02-18 00:24:05.351975055 +0000 UTC m=+928.866432790" observedRunningTime="2026-02-18 00:24:06.457960763 +0000 UTC m=+929.972418498" watchObservedRunningTime="2026-02-18 00:24:06.459562396 +0000 UTC m=+929.974020151" Feb 18 00:24:07 crc kubenswrapper[5121]: I0218 00:24:07.457504 5121 generic.go:358] "Generic (PLEG): container finished" podID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerID="6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff" exitCode=0 Feb 18 00:24:07 crc kubenswrapper[5121]: I0218 00:24:07.457943 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" 
event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerDied","Data":"6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"} Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.462985 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509449 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"a2317347a2371f7f72d7583626cc53fa549b6cc8af923a9a4fbb4f967bfe4e74"} Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509778 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.509564 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512299 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512678 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.512814 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-jq5n2\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.513875 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643316 5121 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643434 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643470 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643564 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.643666 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745230 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745300 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745328 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745385 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.745431 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: E0218 00:24:08.745481 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:08 crc kubenswrapper[5121]: E0218 00:24:08.745558 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls podName:3a752ce6-d6e6-4222-9c73-8f79a4272c55 nodeName:}" failed. No retries permitted until 2026-02-18 00:24:09.245538355 +0000 UTC m=+932.759996090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" (UID: "3a752ce6-d6e6-4222-9c73-8f79a4272c55") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.746035 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a752ce6-d6e6-4222-9c73-8f79a4272c55-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.746604 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/3a752ce6-d6e6-4222-9c73-8f79a4272c55-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.752495 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.770488 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5n2sq\" (UniqueName: \"kubernetes.io/projected/3a752ce6-d6e6-4222-9c73-8f79a4272c55-kube-api-access-5n2sq\") pod 
\"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.806297 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.946912 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") pod \"fb912abb-9dfb-4035-9eea-266ad0057af0\" (UID: \"fb912abb-9dfb-4035-9eea-266ad0057af0\") " Feb 18 00:24:08 crc kubenswrapper[5121]: I0218 00:24:08.955202 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28" (OuterVolumeSpecName: "kube-api-access-xgj28") pod "fb912abb-9dfb-4035-9eea-266ad0057af0" (UID: "fb912abb-9dfb-4035-9eea-266ad0057af0"). InnerVolumeSpecName "kube-api-access-xgj28". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.048102 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgj28\" (UniqueName: \"kubernetes.io/projected/fb912abb-9dfb-4035-9eea-266ad0057af0-kube-api-access-xgj28\") on node \"crc\" DevicePath \"\"" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.250623 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:09 crc kubenswrapper[5121]: E0218 00:24:09.250827 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:09 crc kubenswrapper[5121]: E0218 00:24:09.250944 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls podName:3a752ce6-d6e6-4222-9c73-8f79a4272c55 nodeName:}" failed. No retries permitted until 2026-02-18 00:24:10.250922315 +0000 UTC m=+933.765380050 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" (UID: "3a752ce6-d6e6-4222-9c73-8f79a4272c55") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475507 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522904-frzvq" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475565 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522904-frzvq" event={"ID":"fb912abb-9dfb-4035-9eea-266ad0057af0","Type":"ContainerDied","Data":"80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73"} Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.475700 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80af0a2b2998531b1191a28db54c33f939bcbbd7b8b2885564678b8847eada73" Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.529111 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:24:09 crc kubenswrapper[5121]: I0218 00:24:09.534806 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522898-b8lhd"] Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.266224 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.278335 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3a752ce6-d6e6-4222-9c73-8f79a4272c55-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-z2n4r\" (UID: \"3a752ce6-d6e6-4222-9c73-8f79a4272c55\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 
00:24:10.324808 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" Feb 18 00:24:10 crc kubenswrapper[5121]: E0218 00:24:10.390172 5121 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36845eb3_f7ec_4a0f_81ca_6650cc34a86d.slice/crio-conmon-81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed.scope\": RecentStats: unable to find data in memory cache]" Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.501435 5121 generic.go:358] "Generic (PLEG): container finished" podID="36845eb3-f7ec-4a0f-81ca-6650cc34a86d" containerID="81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed" exitCode=0 Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.501486 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerDied","Data":"81a46725c602200ad5a251e6d9f589de5207655246c479bf40199866e889e1ed"} Feb 18 00:24:10 crc kubenswrapper[5121]: I0218 00:24:10.756066 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r"] Feb 18 00:24:10 crc kubenswrapper[5121]: W0218 00:24:10.761193 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a752ce6_d6e6_4222_9c73_8f79a4272c55.slice/crio-d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c WatchSource:0}: Error finding container d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c: Status 404 returned error can't find the container with id d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.031111 5121 kubelet.go:2537] "SyncLoop 
ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035304 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035346 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.035507 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" containerName="oc" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.060164 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.060318 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.063154 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.063895 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182203 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc 
kubenswrapper[5121]: I0218 00:24:11.182254 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khnnw\" (UniqueName: \"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182316 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182410 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.182435 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288367 5121 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288546 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288597 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288697 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.288761 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-khnnw\" (UniqueName: 
\"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.289135 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.289253 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls podName:91bcc3e0-8b13-4cb5-a115-01265bb95b3a nodeName:}" failed. No retries permitted until 2026-02-18 00:24:11.789227109 +0000 UTC m=+935.303684844 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" (UID: "91bcc3e0-8b13-4cb5-a115-01265bb95b3a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.290386 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.290716 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" 
(UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.299849 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.305785 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0752b905-c20c-4af0-a716-b5297e9ed6fc" path="/var/lib/kubelet/pods/0752b905-c20c-4af0-a716-b5297e9ed6fc/volumes" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.347434 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-khnnw\" (UniqueName: \"kubernetes.io/projected/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-kube-api-access-khnnw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.510166 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"d7988ad890a436acb873c476061ab0cb79a1c4299564f57254a2dfe4e642b73c"} Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.638303 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.655385 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.685318 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.801122 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: E0218 00:24:11.801260 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls podName:91bcc3e0-8b13-4cb5-a115-01265bb95b3a nodeName:}" failed. No retries permitted until 2026-02-18 00:24:12.801237708 +0000 UTC m=+936.315695443 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" (UID: "91bcc3e0-8b13-4cb5-a115-01265bb95b3a") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.800955 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.801879 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod 
\"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.801963 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.802061 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903368 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903503 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.903538 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: 
\"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.904696 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.904738 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:11 crc kubenswrapper[5121]: I0218 00:24:11.927829 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"community-operators-7j85x\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") " pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.009733 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.535301 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:12 crc kubenswrapper[5121]: W0218 00:24:12.556131 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28a327dc_9b2e_492e_b906_456dbc2fc6a8.slice/crio-2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d WatchSource:0}: Error finding container 2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d: Status 404 returned error can't find the container with id 2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.836259 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.845332 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/91bcc3e0-8b13-4cb5-a115-01265bb95b3a-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj\" (UID: \"91bcc3e0-8b13-4cb5-a115-01265bb95b3a\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:12 crc kubenswrapper[5121]: I0218 00:24:12.881353 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529583 5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b" exitCode=0 Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529642 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b"} Feb 18 00:24:13 crc kubenswrapper[5121]: I0218 00:24:13.529729 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerStarted","Data":"2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d"} Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.342106 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"] Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.565594 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"] Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.565741 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.567949 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.567988 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.675948 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksjp4\" (UniqueName: \"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.675995 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.676026 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc 
kubenswrapper[5121]: I0218 00:24:15.676052 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.676074 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777783 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777841 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777933 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ksjp4\" (UniqueName: 
\"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777960 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.777978 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: E0218 00:24:15.778135 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:15 crc kubenswrapper[5121]: E0218 00:24:15.778198 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls podName:0f0eb637-4674-4fad-bb8e-e0b7d5ac913b nodeName:}" failed. No retries permitted until 2026-02-18 00:24:16.278181869 +0000 UTC m=+939.792639604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" (UID: "0f0eb637-4674-4fad-bb8e-e0b7d5ac913b") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.778559 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.779000 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.784193 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:15 crc kubenswrapper[5121]: I0218 00:24:15.794903 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksjp4\" (UniqueName: \"kubernetes.io/projected/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-kube-api-access-ksjp4\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:16 crc kubenswrapper[5121]: I0218 00:24:16.286196 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:16 crc kubenswrapper[5121]: E0218 00:24:16.286384 5121 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:16 crc kubenswrapper[5121]: E0218 00:24:16.286754 5121 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls podName:0f0eb637-4674-4fad-bb8e-e0b7d5ac913b nodeName:}" failed. No retries permitted until 2026-02-18 00:24:17.286733646 +0000 UTC m=+940.801191371 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" (UID: "0f0eb637-4674-4fad-bb8e-e0b7d5ac913b") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.306679 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.316427 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f0eb637-4674-4fad-bb8e-e0b7d5ac913b-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94\" (UID: \"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:17 crc kubenswrapper[5121]: I0218 00:24:17.384104 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" Feb 18 00:24:18 crc kubenswrapper[5121]: I0218 00:24:18.244743 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj"] Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.469520 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94"] Feb 18 00:24:19 crc kubenswrapper[5121]: W0218 00:24:19.470455 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f0eb637_4674_4fad_bb8e_e0b7d5ac913b.slice/crio-7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9 WatchSource:0}: Error finding container 7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9: Status 404 returned error can't find the container with id 7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9 Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.573831 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"7ea6ca252d23bec9d59ea3de6588c8e9b5166bd128d4fe947b95c0e4277f92d9"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.575980 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"071b5e39dd4ac4914a52e120b41b6f781f26dfc0c5e96684364c62394b496601"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.577823 5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4" exitCode=0 Feb 18 00:24:19 crc 
kubenswrapper[5121]: I0218 00:24:19.577856 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.580681 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7acc81c6-6ef1-4c1d-ac51-c020076734e6","Type":"ContainerStarted","Data":"8ed2cd045ceb026d04ae32346ac2f322abf898e75bbf09092d3aa9b392e18a1b"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.586376 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"fbe403427ad4c94ed9ebfde76600b3526a2e3fd32848bd54582fde0e4ca7bfac"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.600836 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"50d14054aa7d1fa3bac45fca8f3330519f05dc6f5e47e7292cfa22441815e0e2"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.600875 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"d569ccb340feefe343714d179e6002dcef1c06be840690b75462843418dfb554"} Feb 18 00:24:19 crc kubenswrapper[5121]: I0218 00:24:19.629834 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=3.732579142 podStartE2EDuration="38.629817118s" podCreationTimestamp="2026-02-18 00:23:41 +0000 UTC" firstStartedPulling="2026-02-18 
00:23:44.164326481 +0000 UTC m=+907.678784226" lastFinishedPulling="2026-02-18 00:24:19.061564467 +0000 UTC m=+942.576022202" observedRunningTime="2026-02-18 00:24:19.619829468 +0000 UTC m=+943.134287213" watchObservedRunningTime="2026-02-18 00:24:19.629817118 +0000 UTC m=+943.144274853" Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.636016 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"fe76f828030cc82e9fe77ba56db235ef3083eb14713524748290777b0e579992"} Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.643712 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerStarted","Data":"7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880"} Feb 18 00:24:20 crc kubenswrapper[5121]: I0218 00:24:20.675458 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7j85x" podStartSLOduration=4.140915838 podStartE2EDuration="9.675445332s" podCreationTimestamp="2026-02-18 00:24:11 +0000 UTC" firstStartedPulling="2026-02-18 00:24:13.530502157 +0000 UTC m=+937.044959902" lastFinishedPulling="2026-02-18 00:24:19.065031671 +0000 UTC m=+942.579489396" observedRunningTime="2026-02-18 00:24:20.673762437 +0000 UTC m=+944.188220172" watchObservedRunningTime="2026-02-18 00:24:20.675445332 +0000 UTC m=+944.189903067" Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.011543 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.011585 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:22 
crc kubenswrapper[5121]: I0218 00:24:22.066376 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7j85x" Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.191270 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"] Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.968240 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"] Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.968861 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.973076 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 18 00:24:22 crc kubenswrapper[5121]: I0218 00:24:22.973843 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002466 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002606 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod 
\"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.002764 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.003088 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104203 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104315 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104466 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.104528 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.105587 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/de3c7540-7b8d-4e77-968d-68b42aecf4df-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.105664 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3c7540-7b8d-4e77-968d-68b42aecf4df-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.112713 5121 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"] Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.119290 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/de3c7540-7b8d-4e77-968d-68b42aecf4df-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.131318 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ddkj\" (UniqueName: \"kubernetes.io/projected/de3c7540-7b8d-4e77-968d-68b42aecf4df-kube-api-access-2ddkj\") pod \"default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7\" (UID: \"de3c7540-7b8d-4e77-968d-68b42aecf4df\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.210327 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"] Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.210571 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.214906 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.289378 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306543 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306603 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306768 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.306791 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 
00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408814 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408880 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.408940 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.409003 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.409740 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/906f1c26-b94f-41a4-98f4-524412eb9029-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.410074 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/906f1c26-b94f-41a4-98f4-524412eb9029-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.413896 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/906f1c26-b94f-41a4-98f4-524412eb9029-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.444282 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6stnl\" (UniqueName: \"kubernetes.io/projected/906f1c26-b94f-41a4-98f4-524412eb9029-kube-api-access-6stnl\") pod \"default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv\" (UID: \"906f1c26-b94f-41a4-98f4-524412eb9029\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.533019 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.575105 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7"]
Feb 18 00:24:23 crc kubenswrapper[5121]: W0218 00:24:23.585795 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde3c7540_7b8d_4e77_968d_68b42aecf4df.slice/crio-11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d WatchSource:0}: Error finding container 11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d: Status 404 returned error can't find the container with id 11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.675529 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"11a9f96caada59fdf68bb5f355b392a0fdc011e83c7f9ba83887a373351ab65d"}
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.914489 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:23 crc kubenswrapper[5121]: I0218 00:24:23.958936 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv"]
Feb 18 00:24:23 crc kubenswrapper[5121]: W0218 00:24:23.966638 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod906f1c26_b94f_41a4_98f4_524412eb9029.slice/crio-37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10 WatchSource:0}: Error finding container 37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10: Status 404 returned error can't find the container with id 37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10
Feb 18 00:24:24 crc kubenswrapper[5121]: I0218 00:24:24.698518 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"0f6862c1c325e0eebb91cb6d731dcea060c881df94bf5f53b5a256341e498345"}
Feb 18 00:24:24 crc kubenswrapper[5121]: I0218 00:24:24.700686 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"37e04a000a20d1613d3a64064315c804fe79afeefe460d8538375d89cf84cc10"}
Feb 18 00:24:28 crc kubenswrapper[5121]: I0218 00:24:28.914163 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:28 crc kubenswrapper[5121]: I0218 00:24:28.973426 5121 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.755512 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"36845eb3-f7ec-4a0f-81ca-6650cc34a86d","Type":"ContainerStarted","Data":"a42d833080ebec1d54c021dfee797b5eabef0de6aea3b852f2b616a7a362a42c"}
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.815594 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.415989117 podStartE2EDuration="35.815572765s" podCreationTimestamp="2026-02-18 00:23:54 +0000 UTC" firstStartedPulling="2026-02-18 00:24:10.502549725 +0000 UTC m=+934.017007460" lastFinishedPulling="2026-02-18 00:24:25.902133343 +0000 UTC m=+949.416591108" observedRunningTime="2026-02-18 00:24:29.785726277 +0000 UTC m=+953.300184022" watchObservedRunningTime="2026-02-18 00:24:29.815572765 +0000 UTC m=+953.330030510"
Feb 18 00:24:29 crc kubenswrapper[5121]: I0218 00:24:29.820252 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.767611 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.776680 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.780588 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.788202 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd"}
Feb 18 00:24:30 crc kubenswrapper[5121]: I0218 00:24:30.793500 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70"}
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.718720 5121 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.760837 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j85x"]
Feb 18 00:24:33 crc kubenswrapper[5121]: I0218 00:24:33.816314 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7j85x" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server" containerID="cri-o://7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" gracePeriod=2
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.545071 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.545171 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.841859 5121 generic.go:358] "Generic (PLEG): container finished" podID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerID="7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" exitCode=0
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.841937 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880"}
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.913244 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.968535 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"]
Feb 18 00:24:34 crc kubenswrapper[5121]: I0218 00:24:34.968758 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect" containerID="cri-o://999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" gracePeriod=30
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031566 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031601 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.031670 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") pod \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\" (UID: \"28a327dc-9b2e-492e-b906-456dbc2fc6a8\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.032626 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities" (OuterVolumeSpecName: "utilities") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.037503 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk" (OuterVolumeSpecName: "kube-api-access-6w7lk") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "kube-api-access-6w7lk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.092745 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28a327dc-9b2e-492e-b906-456dbc2fc6a8" (UID: "28a327dc-9b2e-492e-b906-456dbc2fc6a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132718 5121 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-utilities\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132759 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6w7lk\" (UniqueName: \"kubernetes.io/projected/28a327dc-9b2e-492e-b906-456dbc2fc6a8-kube-api-access-6w7lk\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.132770 5121 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28a327dc-9b2e-492e-b906-456dbc2fc6a8-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.303579 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340208 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340522 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.340851 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341044 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341204 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341438 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341614 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") pod \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\" (UID: \"e1a43f5a-93d6-4bf5-9595-4b068338fb4b\") "
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.341977 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.342741 5121 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-config\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347349 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg" (OuterVolumeSpecName: "kube-api-access-2fxvg") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "kube-api-access-2fxvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347575 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.347759 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.352592 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.352888 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.353025 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "e1a43f5a-93d6-4bf5-9595-4b068338fb4b" (UID: "e1a43f5a-93d6-4bf5-9595-4b068338fb4b"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.360717 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"]
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361392 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-utilities"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361408 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-utilities"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361425 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361430 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361452 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361458 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361474 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-content"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361479 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="extract-content"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361586 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" containerName="registry-server"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.361607 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerName="default-interconnect"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.370358 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"]
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.370488 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444237 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444528 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444709 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444810 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.444920 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445006 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445107 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445230 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445297 5121 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-sasl-users\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445358 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445410 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445469 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fxvg\" (UniqueName: \"kubernetes.io/projected/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-kube-api-access-2fxvg\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.445526 5121 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/e1a43f5a-93d6-4bf5-9595-4b068338fb4b-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547271 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547374 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547414 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547446 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547479 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547526 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.547592 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.550627 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-config\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.551595 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552040 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552462 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.552891 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.554789 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/58efa647-6d57-485a-89c5-66d831cf05c5-sasl-users\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.566106 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pwwp\" (UniqueName: \"kubernetes.io/projected/58efa647-6d57-485a-89c5-66d831cf05c5-kube-api-access-9pwwp\") pod \"default-interconnect-55bf8d5cb-jpbx6\" (UID: \"58efa647-6d57-485a-89c5-66d831cf05c5\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.688809 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.854840 5121 generic.go:358] "Generic (PLEG): container finished" podID="906f1c26-b94f-41a4-98f4-524412eb9029" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70" exitCode=0
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.854965 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerDied","Data":"cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.855032 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"efda4950061f540a7ffd72f347e5c4a18557ebe47ccf07ffb1b1a9fab3211bae"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.856996 5121 scope.go:117] "RemoveContainer" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857436 5121 generic.go:358] "Generic (PLEG): container finished" podID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" exitCode=0
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857516 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857571 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerDied","Data":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857642 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-bh9xk" event={"ID":"e1a43f5a-93d6-4bf5-9595-4b068338fb4b","Type":"ContainerDied","Data":"9eeae94b2371aca06b1fff878de03f353746d9ae39e51b7711cfeed085dac7eb"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.857714 5121 scope.go:117] "RemoveContainer" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.858222 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.863888 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7j85x" event={"ID":"28a327dc-9b2e-492e-b906-456dbc2fc6a8","Type":"ContainerDied","Data":"2e3da940cd5c3c685ff401faa083e5372327e9ce5d13bc825e172f3bffd4272d"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.863953 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7j85x"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873363 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536" exitCode=0
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873435 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerDied","Data":"1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.873510 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"6963696d1f9a84a281118d3169e896ad36a6021a3604a72ec4a3182e1c91767c"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.875085 5121 scope.go:117] "RemoveContainer" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536"
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.878206 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"cefa498b634ef72c709f357711a1f7d1acabd66398276efd8fbb5fcfe560ed88"}
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890560 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3c7540-7b8d-4e77-968d-68b42aecf4df" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" exitCode=0
Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890737 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod"
pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerDied","Data":"061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.890774 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"d3bd4c142e299f09b3938e52b63ceb1cf56ec80e91344bf1cd5dc1a6097057bd"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.891745 5121 scope.go:117] "RemoveContainer" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.900238 5121 scope.go:117] "RemoveContainer" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" Feb 18 00:24:35 crc kubenswrapper[5121]: E0218 00:24:35.902241 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": container with ID starting with 999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044 not found: ID does not exist" containerID="999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.902283 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044"} err="failed to get container status \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": rpc error: code = NotFound desc = could not find container \"999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044\": container with ID starting with 999f2850877c0058fe2bc26db3018280d653e10c605e6fee21908c314db5a044 not found: ID does not exist" Feb 
18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.902311 5121 scope.go:117] "RemoveContainer" containerID="7c7ff59abc9ec33f884c9d7f3bb923ec3ed13b8e2db588f2ffe1ae367e8ed880" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909036 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" exitCode=0 Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909121 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerDied","Data":"43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.909187 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"57290db7656ed8c68e6185d925943f220702a9c2cba0ca8af863658c2a3f1ef0"} Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.910862 5121 scope.go:117] "RemoveContainer" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.943306 5121 scope.go:117] "RemoveContainer" containerID="41fcfcabb578c6900831983846be283ff2f84b626213226dad02d958bc28e3e4" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.952841 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.960543 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7j85x"] Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.970954 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" podStartSLOduration=9.147757179 podStartE2EDuration="24.970935783s" podCreationTimestamp="2026-02-18 00:24:11 +0000 UTC" firstStartedPulling="2026-02-18 00:24:18.85407422 +0000 UTC m=+942.368531975" lastFinishedPulling="2026-02-18 00:24:34.677252844 +0000 UTC m=+958.191710579" observedRunningTime="2026-02-18 00:24:35.964226022 +0000 UTC m=+959.478683757" watchObservedRunningTime="2026-02-18 00:24:35.970935783 +0000 UTC m=+959.485393518" Feb 18 00:24:35 crc kubenswrapper[5121]: I0218 00:24:35.980177 5121 scope.go:117] "RemoveContainer" containerID="98708d1792476c4c66f1f72b097c066ee9a0a22f45820501a8f20ad24e6ea16b" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.013608 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.018871 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-bh9xk"] Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.229173 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jpbx6"] Feb 18 00:24:36 crc kubenswrapper[5121]: W0218 00:24:36.229736 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58efa647_6d57_485a_89c5_66d831cf05c5.slice/crio-386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3 WatchSource:0}: Error finding container 386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3: Status 404 returned error can't find the container with id 386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3 Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.918282 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" 
event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921179 5121 generic.go:358] "Generic (PLEG): container finished" podID="91bcc3e0-8b13-4cb5-a115-01265bb95b3a" containerID="ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247" exitCode=0 Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921302 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerDied","Data":"ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.921966 5121 scope.go:117] "RemoveContainer" containerID="ea9c1d08c8f8d83fd86966978b0aba41d00fad3352642150dede9ef268305247" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.923792 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.932159 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.934853 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a"} Feb 18 00:24:36 
crc kubenswrapper[5121]: I0218 00:24:36.940341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" event={"ID":"58efa647-6d57-485a-89c5-66d831cf05c5","Type":"ContainerStarted","Data":"9f776baaa4499d30b59a74c49747fa59bd41448eb4eae9a5065731be6acbdf23"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.940405 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" event={"ID":"58efa647-6d57-485a-89c5-66d831cf05c5","Type":"ContainerStarted","Data":"386d8e4fbf0e4fd33ed26a95f4882ff851109e5fff09430fdcfae107868965c3"} Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.950734 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podStartSLOduration=3.3765267310000002 podStartE2EDuration="28.950716744s" podCreationTimestamp="2026-02-18 00:24:08 +0000 UTC" firstStartedPulling="2026-02-18 00:24:10.762806009 +0000 UTC m=+934.277263744" lastFinishedPulling="2026-02-18 00:24:36.336996002 +0000 UTC m=+959.851453757" observedRunningTime="2026-02-18 00:24:36.94463172 +0000 UTC m=+960.459089465" watchObservedRunningTime="2026-02-18 00:24:36.950716744 +0000 UTC m=+960.465174489" Feb 18 00:24:36 crc kubenswrapper[5121]: I0218 00:24:36.985281 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podStartSLOduration=1.675015187 podStartE2EDuration="13.98526786s" podCreationTimestamp="2026-02-18 00:24:23 +0000 UTC" firstStartedPulling="2026-02-18 00:24:23.968777279 +0000 UTC m=+947.483235014" lastFinishedPulling="2026-02-18 00:24:36.279029952 +0000 UTC m=+959.793487687" observedRunningTime="2026-02-18 00:24:36.981819946 +0000 UTC m=+960.496277691" watchObservedRunningTime="2026-02-18 00:24:36.98526786 +0000 UTC m=+960.499725595" Feb 18 00:24:37 crc 
kubenswrapper[5121]: I0218 00:24:37.017862 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podStartSLOduration=4.991627983 podStartE2EDuration="22.017841111s" podCreationTimestamp="2026-02-18 00:24:15 +0000 UTC" firstStartedPulling="2026-02-18 00:24:19.475016029 +0000 UTC m=+942.989473764" lastFinishedPulling="2026-02-18 00:24:36.501229147 +0000 UTC m=+960.015686892" observedRunningTime="2026-02-18 00:24:37.009815824 +0000 UTC m=+960.524273569" watchObservedRunningTime="2026-02-18 00:24:37.017841111 +0000 UTC m=+960.532298866" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.056347 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podStartSLOduration=2.326189145 podStartE2EDuration="15.056323724s" podCreationTimestamp="2026-02-18 00:24:22 +0000 UTC" firstStartedPulling="2026-02-18 00:24:23.587324524 +0000 UTC m=+947.101782259" lastFinishedPulling="2026-02-18 00:24:36.317459083 +0000 UTC m=+959.831916838" observedRunningTime="2026-02-18 00:24:37.05030199 +0000 UTC m=+960.564759735" watchObservedRunningTime="2026-02-18 00:24:37.056323724 +0000 UTC m=+960.570781469" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.077426 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-jpbx6" podStartSLOduration=3.077396254 podStartE2EDuration="3.077396254s" podCreationTimestamp="2026-02-18 00:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-18 00:24:37.068954985 +0000 UTC m=+960.583412740" watchObservedRunningTime="2026-02-18 00:24:37.077396254 +0000 UTC m=+960.591854039" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.279803 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="28a327dc-9b2e-492e-b906-456dbc2fc6a8" path="/var/lib/kubelet/pods/28a327dc-9b2e-492e-b906-456dbc2fc6a8/volumes" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.280858 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a43f5a-93d6-4bf5-9595-4b068338fb4b" path="/var/lib/kubelet/pods/e1a43f5a-93d6-4bf5-9595-4b068338fb4b/volumes" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948189 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948262 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerDied","Data":"035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948554 5121 scope.go:117] "RemoveContainer" containerID="1a17d2698060c3ddee9e8085a1f7ef0e231eebb24c51006915d2339738b95536" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.948837 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.949209 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_service-telemetry(3a752ce6-d6e6-4222-9c73-8f79a4272c55)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podUID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.957339 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj" event={"ID":"91bcc3e0-8b13-4cb5-a115-01265bb95b3a","Type":"ContainerStarted","Data":"d6f65c4bca08644816b9ab16ef0e781c2411f29df16531d608f70f576a379950"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964062 5121 generic.go:358] "Generic (PLEG): container finished" podID="de3c7540-7b8d-4e77-968d-68b42aecf4df" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964157 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerDied","Data":"78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.964793 5121 scope.go:117] "RemoveContainer" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.965102 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_service-telemetry(de3c7540-7b8d-4e77-968d-68b42aecf4df)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podUID="de3c7540-7b8d-4e77-968d-68b42aecf4df" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.972880 5121 generic.go:358] "Generic (PLEG): container finished" podID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.973144 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" 
event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerDied","Data":"101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.973784 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.974131 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_service-telemetry(0f0eb637-4674-4fad-bb8e-e0b7d5ac913b)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podUID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.976910 5121 generic.go:358] "Generic (PLEG): container finished" podID="906f1c26-b94f-41a4-98f4-524412eb9029" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" exitCode=0 Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.977885 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerDied","Data":"bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a"} Feb 18 00:24:37 crc kubenswrapper[5121]: I0218 00:24:37.978032 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:37 crc kubenswrapper[5121]: E0218 00:24:37.978214 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_service-telemetry(906f1c26-b94f-41a4-98f4-524412eb9029)\"" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podUID="906f1c26-b94f-41a4-98f4-524412eb9029" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.009377 5121 scope.go:117] "RemoveContainer" containerID="061e8c1678cd817f61b28998ae3f1648764b3082be965ead597793622fd3590d" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.072813 5121 scope.go:117] "RemoveContainer" containerID="43e33421e22a0d4479c9167feb574abca656d689757aaa79d48caa87ae16f3bd" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.108396 5121 scope.go:117] "RemoveContainer" containerID="cef934d8bf6d000d47ecedec366ce6e31918e8b6d0a671717f77c2d81d0b8d70" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.987083 5121 scope.go:117] "RemoveContainer" containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.987561 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_service-telemetry(de3c7540-7b8d-4e77-968d-68b42aecf4df)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" podUID="de3c7540-7b8d-4e77-968d-68b42aecf4df" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.989126 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.989541 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_service-telemetry(0f0eb637-4674-4fad-bb8e-e0b7d5ac913b)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" podUID="0f0eb637-4674-4fad-bb8e-e0b7d5ac913b" Feb 18 
00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.991022 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.991218 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_service-telemetry(906f1c26-b94f-41a4-98f4-524412eb9029)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" podUID="906f1c26-b94f-41a4-98f4-524412eb9029" Feb 18 00:24:38 crc kubenswrapper[5121]: I0218 00:24:38.995438 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:38 crc kubenswrapper[5121]: E0218 00:24:38.995745 5121 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_service-telemetry(3a752ce6-d6e6-4222-9c73-8f79a4272c55)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" podUID="3a752ce6-d6e6-4222-9c73-8f79a4272c55" Feb 18 00:24:42 crc kubenswrapper[5121]: I0218 00:24:42.661716 5121 scope.go:117] "RemoveContainer" containerID="6df8e5d37ed8641c59178b1b8167978f4db2c4f4c7a2d5703ab6d4d5d7849eea" Feb 18 00:24:50 crc kubenswrapper[5121]: I0218 00:24:50.271077 5121 scope.go:117] "RemoveContainer" containerID="bd7513e831a5c1f774d62a894222ec62ca4572ad3ca0fbd490c8fd74d274a05a" Feb 18 00:24:50 crc kubenswrapper[5121]: I0218 00:24:50.271700 5121 scope.go:117] "RemoveContainer" containerID="101ccde2ce27cdf36b906721414cf2108208409487f92f5571959958f9287b76" Feb 18 00:24:51 crc kubenswrapper[5121]: I0218 00:24:51.274626 5121 scope.go:117] "RemoveContainer" 
containerID="78e5aa28d34cc12da939d48a00759f3963c19cda9ac4e95288bb56773a776c5d" Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.091078 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7" event={"ID":"de3c7540-7b8d-4e77-968d-68b42aecf4df","Type":"ContainerStarted","Data":"e50909e265bf8cc9b537e1d96694f2bc08543f924a3141fc31bd355be65ae0a4"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.093822 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94" event={"ID":"0f0eb637-4674-4fad-bb8e-e0b7d5ac913b","Type":"ContainerStarted","Data":"937a181aeb5d024a886e741706ae7b0ea76c42904b602cdffcbbc6ca17c4fcdb"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.096080 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv" event={"ID":"906f1c26-b94f-41a4-98f4-524412eb9029","Type":"ContainerStarted","Data":"3c1d34b2fbe278dcf6aa69e7f6f959730166231d83dee8f99d1c10dbae02f0d4"} Feb 18 00:24:52 crc kubenswrapper[5121]: I0218 00:24:52.270859 5121 scope.go:117] "RemoveContainer" containerID="035489cc7a05e971e14362914d77774674b783d4655c0848ea40004a2d813394" Feb 18 00:24:53 crc kubenswrapper[5121]: I0218 00:24:53.108814 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-z2n4r" event={"ID":"3a752ce6-d6e6-4222-9c73-8f79a4272c55","Type":"ContainerStarted","Data":"067271c43756789b2f5a8776ee364b26c15cb8eac1f9152fa0904c40c95d5639"} Feb 18 00:25:04 crc kubenswrapper[5121]: I0218 00:25:04.548867 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Feb 18 00:25:04 crc kubenswrapper[5121]: I0218 00:25:04.550149 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:25:05 crc kubenswrapper[5121]: I0218 00:25:05.326834 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.361848 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.362024 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.365145 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.366030 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.459544 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.459593 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.460097 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561188 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561257 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.561284 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.562584 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/660e52e5-64b3-47d1-b593-e7a50159a146-qdr-test-config\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 
00:25:06.573532 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/660e52e5-64b3-47d1-b593-e7a50159a146-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.589261 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldq9m\" (UniqueName: \"kubernetes.io/projected/660e52e5-64b3-47d1-b593-e7a50159a146-kube-api-access-ldq9m\") pod \"qdr-test\" (UID: \"660e52e5-64b3-47d1-b593-e7a50159a146\") " pod="service-telemetry/qdr-test" Feb 18 00:25:06 crc kubenswrapper[5121]: I0218 00:25:06.696798 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 18 00:25:07 crc kubenswrapper[5121]: I0218 00:25:07.213813 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 18 00:25:08 crc kubenswrapper[5121]: I0218 00:25:08.222781 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"660e52e5-64b3-47d1-b593-e7a50159a146","Type":"ContainerStarted","Data":"f322c4567ab42e23d34367a80478f86ad3587ed43060904bf7d2bf2008455d92"} Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.293951 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"660e52e5-64b3-47d1-b593-e7a50159a146","Type":"ContainerStarted","Data":"17f89fea5d066b2617d6a297a852feecfa82932e483fbb6ba32f1827aed606cb"} Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.314235 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.618736681 podStartE2EDuration="10.314209138s" podCreationTimestamp="2026-02-18 00:25:05 +0000 UTC" firstStartedPulling="2026-02-18 00:25:07.226570486 +0000 UTC 
m=+990.741028221" lastFinishedPulling="2026-02-18 00:25:14.922042943 +0000 UTC m=+998.436500678" observedRunningTime="2026-02-18 00:25:15.30804177 +0000 UTC m=+998.822499595" watchObservedRunningTime="2026-02-18 00:25:15.314209138 +0000 UTC m=+998.828666903" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.630287 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"] Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.653369 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659419 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659630 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.659423 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.660151 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.660591 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.661875 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.669768 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"] Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690039 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690124 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690146 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690202 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690309 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690463 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.690499 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791626 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791707 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791738 5121 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791771 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791802 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791909 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.791950 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793068 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793256 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793443 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793712 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793904 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.793960 5121 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.821232 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"stf-smoketest-smoke1-vj4t5\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:15 crc kubenswrapper[5121]: I0218 00:25:15.985696 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.072407 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.081913 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.091186 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.199488 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.236369 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-vj4t5"] Feb 18 00:25:16 crc kubenswrapper[5121]: W0218 00:25:16.239392 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod940e8886_3e2e_46ea_b228_a4d1b058909f.slice/crio-5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb WatchSource:0}: Error finding container 5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb: Status 404 returned error can't find the container with id 5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.285426 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb"} Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.301488 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl" Feb 18 00:25:16 crc 
kubenswrapper[5121]: I0218 00:25:16.322668 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"curl\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " pod="service-telemetry/curl" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.430717 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 18 00:25:16 crc kubenswrapper[5121]: I0218 00:25:16.606331 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 18 00:25:17 crc kubenswrapper[5121]: I0218 00:25:17.296721 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerStarted","Data":"9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45"} Feb 18 00:25:18 crc kubenswrapper[5121]: I0218 00:25:18.310238 5121 generic.go:358] "Generic (PLEG): container finished" podID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerID="8507d0cfae59ce3abe3771ca9e2fb07a2a5834e143ffbda509bdca00c1c4c8fa" exitCode=0 Feb 18 00:25:18 crc kubenswrapper[5121]: I0218 00:25:18.310335 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerDied","Data":"8507d0cfae59ce3abe3771ca9e2fb07a2a5834e143ffbda509bdca00c1c4c8fa"} Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.531148 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.619689 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") pod \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\" (UID: \"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188\") " Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.631569 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l" (OuterVolumeSpecName: "kube-api-access-dsb5l") pod "a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" (UID: "a7b03f25-b4f9-4ccf-8d2e-03b352e2c188"). InnerVolumeSpecName "kube-api-access-dsb5l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.721442 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsb5l\" (UniqueName: \"kubernetes.io/projected/a7b03f25-b4f9-4ccf-8d2e-03b352e2c188-kube-api-access-dsb5l\") on node \"crc\" DevicePath \"\"" Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.723965 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_a7b03f25-b4f9-4ccf-8d2e-03b352e2c188/curl/0.log" Feb 18 00:25:23 crc kubenswrapper[5121]: I0218 00:25:23.968719 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log" Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.373945 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"a7b03f25-b4f9-4ccf-8d2e-03b352e2c188","Type":"ContainerDied","Data":"9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45"} Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.373981 5121 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cdd6671ec3d94d40952a3c32a37e7f8629157efde03c71053bfe97a56aa4d45" Feb 18 00:25:24 crc kubenswrapper[5121]: I0218 00:25:24.374043 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 18 00:25:25 crc kubenswrapper[5121]: I0218 00:25:25.398307 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103"} Feb 18 00:25:30 crc kubenswrapper[5121]: I0218 00:25:30.440270 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerStarted","Data":"fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c"} Feb 18 00:25:30 crc kubenswrapper[5121]: I0218 00:25:30.482470 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" podStartSLOduration=1.501063933 podStartE2EDuration="15.482449973s" podCreationTimestamp="2026-02-18 00:25:15 +0000 UTC" firstStartedPulling="2026-02-18 00:25:16.241353244 +0000 UTC m=+999.755810979" lastFinishedPulling="2026-02-18 00:25:30.222739284 +0000 UTC m=+1013.737197019" observedRunningTime="2026-02-18 00:25:30.477555701 +0000 UTC m=+1013.992013476" watchObservedRunningTime="2026-02-18 00:25:30.482449973 +0000 UTC m=+1013.996907718" Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.545131 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 
00:25:34.545506 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.545565 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.546463 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:25:34 crc kubenswrapper[5121]: I0218 00:25:34.546568 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" gracePeriod=600 Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.485991 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" exitCode=0 Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.486061 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6"} Feb 18 00:25:35 crc 
kubenswrapper[5121]: I0218 00:25:35.486484 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"} Feb 18 00:25:35 crc kubenswrapper[5121]: I0218 00:25:35.486505 5121 scope.go:117] "RemoveContainer" containerID="439db9843e142a2f5407c90d33596c9b7a84028175dd63c3376bc95723bc0bb2" Feb 18 00:25:54 crc kubenswrapper[5121]: I0218 00:25:54.097468 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log" Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.716604 5121 generic.go:358] "Generic (PLEG): container finished" podID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerID="5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103" exitCode=0 Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.716772 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103"} Feb 18 00:25:59 crc kubenswrapper[5121]: I0218 00:25:59.717988 5121 scope.go:117] "RemoveContainer" containerID="5882aaa4ed69970dc074218dd8b390059514428a6090fce34f38e1a16c4e3103" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.146900 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"] Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148578 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148626 5121 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.148943 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a7b03f25-b4f9-4ccf-8d2e-03b352e2c188" containerName="curl" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.156911 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.159275 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"] Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.160362 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.162175 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.170173 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.217323 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.319040 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: 
\"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.352596 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"auto-csr-approver-29522906-hgbxw\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.494354 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:00 crc kubenswrapper[5121]: I0218 00:26:00.751573 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522906-hgbxw"] Feb 18 00:26:01 crc kubenswrapper[5121]: I0218 00:26:01.737004 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerStarted","Data":"b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4"} Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.761342 5121 generic.go:358] "Generic (PLEG): container finished" podID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerID="fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c" exitCode=0 Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.761778 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"fd838701cd40c66733f4729510708eb9a122433555e66de62afeceb349f0d48c"} Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.773336 5121 generic.go:358] "Generic (PLEG): container finished" podID="27e98045-8793-4239-ae6e-54ff007c2064" 
containerID="4b8afd9f2027745ae23d87e7d030ba5e12a46b4b0b9aa4263286e99681437f8a" exitCode=0 Feb 18 00:26:02 crc kubenswrapper[5121]: I0218 00:26:02.773439 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerDied","Data":"4b8afd9f2027745ae23d87e7d030ba5e12a46b4b0b9aa4263286e99681437f8a"} Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.136859 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.143005 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191580 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191695 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.191729 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192571 5121 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192660 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192696 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") pod \"27e98045-8793-4239-ae6e-54ff007c2064\" (UID: \"27e98045-8793-4239-ae6e-54ff007c2064\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192719 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.192806 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") pod \"940e8886-3e2e-46ea-b228-a4d1b058909f\" (UID: \"940e8886-3e2e-46ea-b228-a4d1b058909f\") " Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.200879 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q" 
(OuterVolumeSpecName: "kube-api-access-dwz8q") pod "27e98045-8793-4239-ae6e-54ff007c2064" (UID: "27e98045-8793-4239-ae6e-54ff007c2064"). InnerVolumeSpecName "kube-api-access-dwz8q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.204993 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597" (OuterVolumeSpecName: "kube-api-access-st597") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "kube-api-access-st597". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.211604 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.212112 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.215565 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). 
InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.222452 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.222678 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.233813 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "940e8886-3e2e-46ea-b228-a4d1b058909f" (UID: "940e8886-3e2e-46ea-b228-a4d1b058909f"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294467 5121 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294531 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dwz8q\" (UniqueName: \"kubernetes.io/projected/27e98045-8793-4239-ae6e-54ff007c2064-kube-api-access-dwz8q\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294554 5121 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294574 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-st597\" (UniqueName: \"kubernetes.io/projected/940e8886-3e2e-46ea-b228-a4d1b058909f-kube-api-access-st597\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294592 5121 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294609 5121 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294625 5121 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-ceilometer-publisher\") on node \"crc\" 
DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.294642 5121 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/940e8886-3e2e-46ea-b228-a4d1b058909f-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795276 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795268 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-vj4t5" event={"ID":"940e8886-3e2e-46ea-b228-a4d1b058909f","Type":"ContainerDied","Data":"5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb"} Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.795734 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aefb141ec48648a7568db6c102b057b6aae597481872235e7c03d13c52f32eb" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799268 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" event={"ID":"27e98045-8793-4239-ae6e-54ff007c2064","Type":"ContainerDied","Data":"b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4"} Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799369 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b204090bb33d92bffc7a43551fb76ccdc122441e471eb713e584745a4b067fe4" Feb 18 00:26:04 crc kubenswrapper[5121]: I0218 00:26:04.799460 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522906-hgbxw" Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.228288 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.240277 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522900-85n6k"] Feb 18 00:26:05 crc kubenswrapper[5121]: I0218 00:26:05.285286 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d8c4383-cf7d-4c99-badf-42f433b91870" path="/var/lib/kubelet/pods/6d8c4383-cf7d-4c99-badf-42f433b91870/volumes" Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.039762 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vj4t5_940e8886-3e2e-46ea-b228-a4d1b058909f/smoketest-collectd/0.log" Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.298384 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-vj4t5_940e8886-3e2e-46ea-b228-a4d1b058909f/smoketest-ceilometer/0.log" Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.537395 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-jpbx6_58efa647-6d57-485a-89c5-66d831cf05c5/default-interconnect/0.log" Feb 18 00:26:06 crc kubenswrapper[5121]: I0218 00:26:06.778276 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_3a752ce6-d6e6-4222-9c73-8f79a4272c55/bridge/2.log" Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.096633 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-z2n4r_3a752ce6-d6e6-4222-9c73-8f79a4272c55/sg-core/0.log" Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.387291 5121 log.go:25] "Finished parsing 
log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_de3c7540-7b8d-4e77-968d-68b42aecf4df/bridge/2.log" Feb 18 00:26:07 crc kubenswrapper[5121]: I0218 00:26:07.706769 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6f6c6b8676-76gw7_de3c7540-7b8d-4e77-968d-68b42aecf4df/sg-core/0.log" Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.282837 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj_91bcc3e0-8b13-4cb5-a115-01265bb95b3a/bridge/1.log" Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.616868 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-6x6sj_91bcc3e0-8b13-4cb5-a115-01265bb95b3a/sg-core/0.log" Feb 18 00:26:08 crc kubenswrapper[5121]: I0218 00:26:08.974191 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_906f1c26-b94f-41a4-98f4-524412eb9029/bridge/2.log" Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.255792 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7757d45944-l2bbv_906f1c26-b94f-41a4-98f4-524412eb9029/sg-core/0.log" Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.573780 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_0f0eb637-4674-4fad-bb8e-e0b7d5ac913b/bridge/2.log" Feb 18 00:26:09 crc kubenswrapper[5121]: I0218 00:26:09.899062 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-pxf94_0f0eb637-4674-4fad-bb8e-e0b7d5ac913b/sg-core/0.log" Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.238398 5121 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zh9kd_a9bb59e6-a92e-442e-87e6-b7331ba07de6/operator/0.log" Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.549059 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7acc81c6-6ef1-4c1d-ac51-c020076734e6/prometheus/0.log" Feb 18 00:26:13 crc kubenswrapper[5121]: I0218 00:26:13.851975 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f3bc26d0-c80d-412d-9370-b821cdb7c2d7/elasticsearch/0.log" Feb 18 00:26:14 crc kubenswrapper[5121]: I0218 00:26:14.139211 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7plz2_37bc1d59-8b60-48c3-aabd-f9337333ef2b/prometheus-webhook-snmp/0.log" Feb 18 00:26:14 crc kubenswrapper[5121]: I0218 00:26:14.491426 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_36845eb3-f7ec-4a0f-81ca-6650cc34a86d/alertmanager/0.log" Feb 18 00:26:27 crc kubenswrapper[5121]: I0218 00:26:27.409325 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-gnq9d_24352f2e-20c2-4d2e-bd18-8fb703441b7b/operator/0.log" Feb 18 00:26:30 crc kubenswrapper[5121]: I0218 00:26:30.694107 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-zh9kd_a9bb59e6-a92e-442e-87e6-b7331ba07de6/operator/0.log" Feb 18 00:26:31 crc kubenswrapper[5121]: I0218 00:26:31.001162 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_660e52e5-64b3-47d1-b593-e7a50159a146/qdr/0.log" Feb 18 00:26:42 crc kubenswrapper[5121]: I0218 00:26:42.818893 5121 scope.go:117] "RemoveContainer" containerID="2772c03a3bd634ef4a9b0f93f7a4ca54d3598f6d92857ea841fed48a41f5f618" Feb 18 00:26:55 crc kubenswrapper[5121]: 
I0218 00:26:55.337734 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.338942 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339044 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339093 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339101 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339134 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339142 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339268 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-collectd" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339290 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="940e8886-3e2e-46ea-b228-a4d1b058909f" containerName="smoketest-ceilometer" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.339304 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="27e98045-8793-4239-ae6e-54ff007c2064" containerName="oc" Feb 18 00:26:55 crc kubenswrapper[5121]: 
I0218 00:26:55.488833 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.488946 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.490757 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-lrqpp\"/\"default-dockercfg-85c9n\"" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.494109 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lrqpp\"/\"openshift-service-ca.crt\"" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.494776 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-lrqpp\"/\"kube-root-ca.crt\"" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.579984 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.580223 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.681467 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: 
\"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.681598 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.682252 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.715501 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"must-gather-f5znv\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.807007 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:26:55 crc kubenswrapper[5121]: I0218 00:26:55.989173 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:26:55 crc kubenswrapper[5121]: W0218 00:26:55.994314 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf78fc17_7be1_4f86_a7aa_dc6b3498c1e1.slice/crio-7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe WatchSource:0}: Error finding container 7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe: Status 404 returned error can't find the container with id 7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe Feb 18 00:26:56 crc kubenswrapper[5121]: I0218 00:26:56.257826 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"7934bcf4bc29c85f2777578796b1a5b7e479a9aa3a5275f6fc153d98a371a8fe"} Feb 18 00:27:03 crc kubenswrapper[5121]: I0218 00:27:03.327246 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"} Feb 18 00:27:04 crc kubenswrapper[5121]: I0218 00:27:04.337132 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerStarted","Data":"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"} Feb 18 00:27:34 crc kubenswrapper[5121]: I0218 00:27:34.544571 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:27:34 crc kubenswrapper[5121]: I0218 00:27:34.545339 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.148100 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-djfbc_efe976a0-6ea6-4283-8b7c-97caa4f2111b/control-plane-machine-set-operator/0.log" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.263605 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-hfw2k_005aa352-e543-4bfd-ba57-b2cb37eb98f6/machine-api-operator/0.log" Feb 18 00:27:51 crc kubenswrapper[5121]: I0218 00:27:51.306889 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-hfw2k_005aa352-e543-4bfd-ba57-b2cb37eb98f6/kube-rbac-proxy/0.log" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.145828 5121 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-lrqpp/must-gather-f5znv" podStartSLOduration=58.276304041 podStartE2EDuration="1m5.145801176s" podCreationTimestamp="2026-02-18 00:26:55 +0000 UTC" firstStartedPulling="2026-02-18 00:26:55.996303333 +0000 UTC m=+1099.510761068" lastFinishedPulling="2026-02-18 00:27:02.865800428 +0000 UTC m=+1106.380258203" observedRunningTime="2026-02-18 00:27:04.361944261 +0000 UTC m=+1107.876401996" watchObservedRunningTime="2026-02-18 00:28:00.145801176 +0000 UTC m=+1163.660258971" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.156752 5121 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.172963 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.174568 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179282 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179579 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.179737 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.266686 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.369054 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.399267 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"auto-csr-approver-29522908-5vgns\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.503155 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:00 crc kubenswrapper[5121]: I0218 00:28:00.999458 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522908-5vgns"] Feb 18 00:28:01 crc kubenswrapper[5121]: W0218 00:28:01.003979 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda618abb0_765d_4bea_a66c_df0ca59df619.slice/crio-50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e WatchSource:0}: Error finding container 50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e: Status 404 returned error can't find the container with id 50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e Feb 18 00:28:01 crc kubenswrapper[5121]: I0218 00:28:01.285052 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerStarted","Data":"50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e"} Feb 18 00:28:03 crc kubenswrapper[5121]: I0218 00:28:03.296488 5121 generic.go:358] "Generic (PLEG): container finished" podID="a618abb0-765d-4bea-a66c-df0ca59df619" containerID="4fa4fdc6c1a12fdaa840796c9add2b2cb68190d125123e63c8c4a08a83ec537c" exitCode=0 Feb 18 00:28:03 crc kubenswrapper[5121]: I0218 00:28:03.296561 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" 
event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerDied","Data":"4fa4fdc6c1a12fdaa840796c9add2b2cb68190d125123e63c8c4a08a83ec537c"} Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.545310 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.545807 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.554386 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.635259 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") pod \"a618abb0-765d-4bea-a66c-df0ca59df619\" (UID: \"a618abb0-765d-4bea-a66c-df0ca59df619\") " Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.646904 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9" (OuterVolumeSpecName: "kube-api-access-s2wf9") pod "a618abb0-765d-4bea-a66c-df0ca59df619" (UID: "a618abb0-765d-4bea-a66c-df0ca59df619"). InnerVolumeSpecName "kube-api-access-s2wf9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.737631 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s2wf9\" (UniqueName: \"kubernetes.io/projected/a618abb0-765d-4bea-a66c-df0ca59df619-kube-api-access-s2wf9\") on node \"crc\" DevicePath \"\"" Feb 18 00:28:04 crc kubenswrapper[5121]: I0218 00:28:04.935259 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-mkxwj_8dac9f2e-68b4-409b-9fd2-bfc0bd928235/cert-manager-controller/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.084419 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-9qb4h_b1211244-3ab3-496b-9610-d2c6d4943528/cert-manager-webhook/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.088474 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-n4bv6_244cd2fe-9d19-45ba-9d3c-2fa6d153f27c/cert-manager-cainjector/0.log" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312437 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522908-5vgns" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312463 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522908-5vgns" event={"ID":"a618abb0-765d-4bea-a66c-df0ca59df619","Type":"ContainerDied","Data":"50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e"} Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.312899 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50f5f7c50448d23bfcd00fc865c98578df33261e23764b2f11092d14b59b0a2e" Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.622374 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:28:05 crc kubenswrapper[5121]: I0218 00:28:05.627172 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522902-4gc7s"] Feb 18 00:28:07 crc kubenswrapper[5121]: I0218 00:28:07.283663 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e811a594-9ca7-4167-807e-e39bd75b7912" path="/var/lib/kubelet/pods/e811a594-9ca7-4167-807e-e39bd75b7912/volumes" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.700742 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-s7jq7_ac0aed84-6c11-41de-9f31-3a7b2a313944/prometheus-operator/0.log" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.799061 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d_5551a95c-fb98-465f-ba4f-3eacc393a47b/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:19 crc kubenswrapper[5121]: I0218 00:28:19.893494 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd_34785a14-a8e1-49c9-bcca-3996487db06f/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:20 crc kubenswrapper[5121]: I0218 00:28:20.010993 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-p6t4z_2277040f-ef0e-4742-a923-fff6ccf3e5aa/operator/0.log" Feb 18 00:28:20 crc kubenswrapper[5121]: I0218 00:28:20.087965 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6hzks_e476d06d-6937-425a-b4b9-ef90c4e141f5/perses-operator/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544130 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544618 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.544678 5121 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.545302 5121 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"} pod="openshift-machine-config-operator/machine-config-daemon-ss65g" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.545358 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" containerID="cri-o://1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519" gracePeriod=600 Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.756479 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.944093 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:34 crc kubenswrapper[5121]: I0218 00:28:34.946971 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.003171 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.168486 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.219604 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.222041 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1zf959_763c3704-8ae0-4b52-9eb0-2dbef76acc66/extract/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.376764 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.522090 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.536387 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539450 5121 generic.go:358] "Generic (PLEG): container finished" podID="ce10664c-304a-460f-819a-bf71f3517fb3" containerID="1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519" exitCode=0 Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539524 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerDied","Data":"1433c34a7aead13ddc8baadb707b9feb663d1867abab2d3a4a2d8e2f07ec5519"} Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539567 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" 
event={"ID":"ce10664c-304a-460f-819a-bf71f3517fb3","Type":"ContainerStarted","Data":"c3d9d582193e7e4195b0e4460b1abc7ca6d2cdfc92b48b41f1d065c10ff1e53a"} Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.539591 5121 scope.go:117] "RemoveContainer" containerID="a3dd9dfe9a35eff090431f299663e39dd1ae0a141bf7651e239d0ba22d1fb6e6" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.589029 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.724024 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/pull/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.727820 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/util/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.762009 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fwcq2n_e7ed8c65-bc15-4ac0-91be-fd93809fe9ad/extract/0.log" Feb 18 00:28:35 crc kubenswrapper[5121]: I0218 00:28:35.908609 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.055389 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.078836 5121 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.115502 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.212053 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.275864 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.279625 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5m8n59_73314776-9f0b-451b-a26b-15edd18cc220/extract/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.424018 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.563134 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.586441 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.647438 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.766089 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/pull/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.819747 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/extract/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.821880 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08pgg7s_a138e59c-43ff-4154-897a-b070bedb8045/util/0.log" Feb 18 00:28:36 crc kubenswrapper[5121]: I0218 00:28:36.946111 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.087363 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.102942 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.103114 5121 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.282944 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.310416 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.470295 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-5hnxm_b3bb7195-d543-4fba-bbe3-661b888f6ab3/registry-server/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.488710 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.622840 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.654317 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.668257 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.784985 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.802972 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9dxsb_51dcc4ed-63a2-4a92-936e-8ef22eca20d6/kube-multus/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.803817 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.814783 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.866932 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-content/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.886151 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/extract-utilities/0.log" Feb 18 00:28:37 crc kubenswrapper[5121]: I0218 00:28:37.932813 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-kdn9c_2265e28f-7cec-4dde-b4c4-be79e7d2ccd2/marketplace-operator/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.068485 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-m24xj_17b15350-ab27-4821-bfb5-2ca12b36c32d/registry-server/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.114569 5121 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.236518 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.252033 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.252158 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.411213 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-content/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.415449 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/extract-utilities/0.log" Feb 18 00:28:38 crc kubenswrapper[5121]: I0218 00:28:38.634647 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-svl96_7f3e3949-ddb8-4d79-8063-8e319147d2b5/registry-server/0.log" Feb 18 00:28:42 crc kubenswrapper[5121]: I0218 00:28:42.985028 5121 scope.go:117] "RemoveContainer" containerID="b11f5a73cbf91d419fed64da70dfe6c9e158164e96434325df36174760c790eb" Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.388334 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-s7jq7_ac0aed84-6c11-41de-9f31-3a7b2a313944/prometheus-operator/0.log" Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 
00:28:51.427336 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-qxr7d_5551a95c-fb98-465f-ba4f-3eacc393a47b/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.442698 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-9bfd4c6c5-sv8rd_34785a14-a8e1-49c9-bcca-3996487db06f/prometheus-operator-admission-webhook/0.log" Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.547827 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-p6t4z_2277040f-ef0e-4742-a923-fff6ccf3e5aa/operator/0.log" Feb 18 00:28:51 crc kubenswrapper[5121]: I0218 00:28:51.588675 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-6hzks_e476d06d-6937-425a-b4b9-ef90c4e141f5/perses-operator/0.log" Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.041089 5121 generic.go:358] "Generic (PLEG): container finished" podID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" exitCode=0 Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.041245 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-lrqpp/must-gather-f5znv" event={"ID":"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1","Type":"ContainerDied","Data":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"} Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.042501 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" Feb 18 00:29:32 crc kubenswrapper[5121]: I0218 00:29:32.645909 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/gather/0.log" Feb 18 00:29:38 
crc kubenswrapper[5121]: I0218 00:29:38.993744 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:29:38 crc kubenswrapper[5121]: I0218 00:29:38.995206 5121 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-lrqpp/must-gather-f5znv" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy" containerID="cri-o://101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" gracePeriod=2 Feb 18 00:29:38 crc kubenswrapper[5121]: I0218 00:29:38.996934 5121 status_manager.go:895] "Failed to get status for pod" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" pod="openshift-must-gather-lrqpp/must-gather-f5znv" err="pods \"must-gather-f5znv\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-lrqpp\": no relationship found between node 'crc' and this object" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.008554 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-lrqpp/must-gather-f5znv"] Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.427135 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/copy/0.log" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.428512 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.524046 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") pod \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.525024 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") pod \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\" (UID: \"df78fc17-7be1-4f86-a7aa-dc6b3498c1e1\") " Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.535054 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx" (OuterVolumeSpecName: "kube-api-access-xlvbx") pod "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" (UID: "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1"). InnerVolumeSpecName "kube-api-access-xlvbx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.576495 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" (UID: "df78fc17-7be1-4f86-a7aa-dc6b3498c1e1"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.627378 5121 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:39 crc kubenswrapper[5121]: I0218 00:29:39.627442 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xlvbx\" (UniqueName: \"kubernetes.io/projected/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1-kube-api-access-xlvbx\") on node \"crc\" DevicePath \"\"" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.114207 5121 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-lrqpp_must-gather-f5znv_df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/copy/0.log" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115115 5121 generic.go:358] "Generic (PLEG): container finished" podID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" exitCode=143 Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115172 5121 scope.go:117] "RemoveContainer" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.115230 5121 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-lrqpp/must-gather-f5znv" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.133517 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.194043 5121 scope.go:117] "RemoveContainer" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" Feb 18 00:29:40 crc kubenswrapper[5121]: E0218 00:29:40.195752 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": container with ID starting with 101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d not found: ID does not exist" containerID="101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.195804 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d"} err="failed to get container status \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": rpc error: code = NotFound desc = could not find container \"101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d\": container with ID starting with 101dd7b0c31fb639c9d907545a909578ae95c53b508c831cea8d2b443a82098d not found: ID does not exist" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.195832 5121 scope.go:117] "RemoveContainer" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" Feb 18 00:29:40 crc kubenswrapper[5121]: E0218 00:29:40.196209 5121 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": container with ID starting with 
7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f not found: ID does not exist" containerID="7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f" Feb 18 00:29:40 crc kubenswrapper[5121]: I0218 00:29:40.196243 5121 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f"} err="failed to get container status \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": rpc error: code = NotFound desc = could not find container \"7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f\": container with ID starting with 7ad46e3b957c614bf41e245b6a98e8612745b34934e28c2847b6eee20e03ff0f not found: ID does not exist" Feb 18 00:29:41 crc kubenswrapper[5121]: I0218 00:29:41.285412 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" path="/var/lib/kubelet/pods/df78fc17-7be1-4f86-a7aa-dc6b3498c1e1/volumes" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.147754 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"] Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149186 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149298 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149366 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149380 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather" Feb 18 00:30:00 crc 
kubenswrapper[5121]: I0218 00:30:00.149425 5121 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149437 5121 state_mem.go:107] "Deleted CPUSet assignment" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149617 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="gather" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149636 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="a618abb0-765d-4bea-a66c-df0ca59df619" containerName="oc" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.149646 5121 memory_manager.go:356] "RemoveStaleState removing state" podUID="df78fc17-7be1-4f86-a7aa-dc6b3498c1e1" containerName="copy" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.159783 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"] Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.159923 5121 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164389 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164712 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-5xhzn\"" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.164855 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.247069 5121 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"] Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.253740 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.257025 5121 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.257042 5121 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.264081 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"] Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.285703 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod 
\"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387704 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387788 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387821 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.387900 5121 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.409809 5121 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"auto-csr-approver-29522910-csvv9\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") " pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.481333 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.489932 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.490191 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.490267 5121 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.491010 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.507576 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.509868 5121 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"collect-profiles-29522910-qctzz\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.582714 5121 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.814307 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"]
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.820466 5121 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 18 00:30:00 crc kubenswrapper[5121]: I0218 00:30:00.970413 5121 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29522910-csvv9"]
Feb 18 00:30:00 crc kubenswrapper[5121]: W0218 00:30:00.977277 5121 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod167070f2_72ad_4082_9db1_e89c473bb595.slice/crio-04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a WatchSource:0}: Error finding container 04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a: Status 404 returned error can't find the container with id 04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.315088 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerStarted","Data":"04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"}
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.316940 5121 generic.go:358] "Generic (PLEG): container finished" podID="3a9a31b1-f17e-43bc-b696-c9c002d88629" containerID="061a84f737dd9d7fe6a81a97c1902f398021276c97a43a1fa12f0da19d4453d9" exitCode=0
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.317064 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerDied","Data":"061a84f737dd9d7fe6a81a97c1902f398021276c97a43a1fa12f0da19d4453d9"}
Feb 18 00:30:01 crc kubenswrapper[5121]: I0218 00:30:01.317135 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerStarted","Data":"7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"}
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.662631 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757372 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757792 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.757926 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") pod \"3a9a31b1-f17e-43bc-b696-c9c002d88629\" (UID: \"3a9a31b1-f17e-43bc-b696-c9c002d88629\") "
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.758859 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume" (OuterVolumeSpecName: "config-volume") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.763772 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm" (OuterVolumeSpecName: "kube-api-access-27fkm") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "kube-api-access-27fkm". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.764150 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3a9a31b1-f17e-43bc-b696-c9c002d88629" (UID: "3a9a31b1-f17e-43bc-b696-c9c002d88629"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860268 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-27fkm\" (UniqueName: \"kubernetes.io/projected/3a9a31b1-f17e-43bc-b696-c9c002d88629-kube-api-access-27fkm\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860505 5121 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3a9a31b1-f17e-43bc-b696-c9c002d88629-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:02 crc kubenswrapper[5121]: I0218 00:30:02.860687 5121 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a9a31b1-f17e-43bc-b696-c9c002d88629-config-volume\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.336759 5121 generic.go:358] "Generic (PLEG): container finished" podID="167070f2-72ad-4082-9db1-e89c473bb595" containerID="c3385bbf6952539702a4158338b53a7c26a289bfe3dd0d2ae24acbf56df164bc" exitCode=0
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.336812 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerDied","Data":"c3385bbf6952539702a4158338b53a7c26a289bfe3dd0d2ae24acbf56df164bc"}
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.339737 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz"
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.339725 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29522910-qctzz" event={"ID":"3a9a31b1-f17e-43bc-b696-c9c002d88629","Type":"ContainerDied","Data":"7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"}
Feb 18 00:30:03 crc kubenswrapper[5121]: I0218 00:30:03.340202 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7adbf0111e51a68b3a6984d27c73a224451776210f0f4b8507058ef2f1b99012"
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.608671 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.793897 5121 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") pod \"167070f2-72ad-4082-9db1-e89c473bb595\" (UID: \"167070f2-72ad-4082-9db1-e89c473bb595\") "
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.800932 5121 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl" (OuterVolumeSpecName: "kube-api-access-d9gtl") pod "167070f2-72ad-4082-9db1-e89c473bb595" (UID: "167070f2-72ad-4082-9db1-e89c473bb595"). InnerVolumeSpecName "kube-api-access-d9gtl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 18 00:30:04 crc kubenswrapper[5121]: I0218 00:30:04.896699 5121 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d9gtl\" (UniqueName: \"kubernetes.io/projected/167070f2-72ad-4082-9db1-e89c473bb595-kube-api-access-d9gtl\") on node \"crc\" DevicePath \"\""
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364256 5121 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29522910-csvv9"
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364341 5121 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29522910-csvv9" event={"ID":"167070f2-72ad-4082-9db1-e89c473bb595","Type":"ContainerDied","Data":"04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"}
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.364407 5121 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04429ab72c88c22fe3d2074e4edb9dd22b322009410845a51a7b920a2721436a"
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.682918 5121 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"]
Feb 18 00:30:05 crc kubenswrapper[5121]: I0218 00:30:05.692102 5121 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29522904-frzvq"]
Feb 18 00:30:07 crc kubenswrapper[5121]: I0218 00:30:07.295287 5121 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb912abb-9dfb-4035-9eea-266ad0057af0" path="/var/lib/kubelet/pods/fb912abb-9dfb-4035-9eea-266ad0057af0/volumes"
Feb 18 00:30:34 crc kubenswrapper[5121]: I0218 00:30:34.544487 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:30:34 crc kubenswrapper[5121]: I0218 00:30:34.545258 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 18 00:30:43 crc kubenswrapper[5121]: I0218 00:30:43.183447 5121 scope.go:117] "RemoveContainer" containerID="6beee68d81b381d47e9cd853ec0193858c46c5b30478e3d0d603fe9cf78cf9ff"
Feb 18 00:31:04 crc kubenswrapper[5121]: I0218 00:31:04.544457 5121 patch_prober.go:28] interesting pod/machine-config-daemon-ss65g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 18 00:31:04 crc kubenswrapper[5121]: I0218 00:31:04.545212 5121 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ss65g" podUID="ce10664c-304a-460f-819a-bf71f3517fb3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"