Feb 19 00:09:29 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 19 00:09:30 crc kubenswrapper[5109]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.666186 5109 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.675038 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.676618 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.676812 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.676914 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677010 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677098 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677193 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677299 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677395 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677498 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677586 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677716 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677811 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.677916 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678013 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678107 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678194 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678288 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678384 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678487 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678590 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678731 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678826 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.678911 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679007 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679111 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679213 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679308 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679403 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679489 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679574 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679702 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679814 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.679925 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680019 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680142 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680238 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680344 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680434 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680530 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680626 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680768 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680869 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.680968 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681072 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681161 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681264 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681364 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681465 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681565 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681702 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681821 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.681912 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682037 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682167 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682273 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682365 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682464 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.682853 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683007 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683116 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683214 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683369 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683525 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683683 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683834 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.683970 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684104 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684253 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684388 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684511 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684679 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.684960 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685086 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685221 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685355 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685488 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685616 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685755 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.685896 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686022 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686144 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686281 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686404 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686537 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.686685 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.691317 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.691470 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.691608 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.691766 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.691970 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.692472 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.692793 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.692956 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693054 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693189 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693293 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693383 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693472 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693590 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693726 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693834 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.693942 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694041 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694129 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694225 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694345 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694438 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694545 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694670 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694771 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.694915 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695009 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695109 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695214 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695311 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695407 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695509 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695602 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695727 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695818 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.695929 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696029 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696148 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696252 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696349 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696445 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696536 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696686 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696801 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.696916 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697006 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697093 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697179 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697276 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697374 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697472 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697567 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697686 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697779 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697882 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.697980 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698077 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698169 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698258 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698381 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698509 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698688 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.698872 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699032 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699145 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699239 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699346 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699449 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699550 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699691 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.699935 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700041 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700162 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700274 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700377 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700468 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700573 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700768 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700874 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.700977 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701069 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701158 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701257 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701365 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701465 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.701568 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.701890 5109 flags.go:64] FLAG: --address="0.0.0.0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702032 5109 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702136 5109 flags.go:64] FLAG: --anonymous-auth="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702231 5109 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702343 5109 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702460 5109 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702567 5109 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702716 5109 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702821 5109 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.702916 5109 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703009 5109 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703100 5109 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703218 5109 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703314 5109 flags.go:64] FLAG: --cgroup-root=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703405 5109 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703512 5109 flags.go:64] FLAG: --client-ca-file=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703671 5109 flags.go:64] FLAG: --cloud-config=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703813 5109 flags.go:64] FLAG: --cloud-provider=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.703941 5109 flags.go:64] FLAG: --cluster-dns="[]"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704134 5109 flags.go:64] FLAG: --cluster-domain=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704296 5109 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704397 5109 flags.go:64] FLAG: --config-dir=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704510 5109 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704668 5109 flags.go:64] FLAG: --container-log-max-files="5"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704834 5109 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.704970 5109 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705099 5109 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705227 5109 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705352 5109 flags.go:64] FLAG: --contention-profiling="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705482 5109 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705597 5109 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705765 5109 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.705879 5109 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706021 5109 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706154 5109 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706280 5109 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706422 5109 flags.go:64] FLAG: --enable-load-reader="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706566 5109 flags.go:64] FLAG: --enable-server="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706743 5109 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.706876 5109 flags.go:64] FLAG: --event-burst="100"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707021 5109 flags.go:64] FLAG: --event-qps="50"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707138 5109 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707259 5109 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707390 5109 flags.go:64] FLAG: --eviction-hard=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707521 5109 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707685 5109 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707835 5109 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.707951 5109 flags.go:64] FLAG: --eviction-soft=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708046 5109 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708175 5109 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708313 5109 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708410 5109 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708501 5109 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708595 5109 flags.go:64] FLAG: --fail-swap-on="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708782 5109 flags.go:64] FLAG: --feature-gates=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.708914 5109 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709011 5109 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709103 5109 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709193 5109 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709295 5109 flags.go:64] FLAG: --healthz-port="10248"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709397 5109 flags.go:64] FLAG: --help="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709499 5109 flags.go:64] FLAG: --hostname-override=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709620 5109 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709768 5109 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709867 5109 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.709978 5109 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710084 5109 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710176 5109 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710277 5109 flags.go:64] FLAG: --image-service-endpoint=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710387 5109 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710486 5109 flags.go:64] FLAG: --kube-api-burst="100"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710593 5109 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710766 5109 flags.go:64] FLAG: --kube-api-qps="50"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710870 5109 flags.go:64] FLAG: --kube-reserved=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.710973 5109 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711074 5109 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711180 5109 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711289 5109 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711384 5109 flags.go:64] FLAG: --lock-file=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711475 5109 flags.go:64] FLAG: --log-cadvisor-usage="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711578 5109 flags.go:64] FLAG: --log-flush-frequency="5s"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711770 5109 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.711913 5109 flags.go:64] FLAG: --log-json-split-stream="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712031 5109 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712128 5109 flags.go:64] FLAG: --log-text-split-stream="false"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712257 5109 flags.go:64] FLAG:
--logging-format="text" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712275 5109 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712284 5109 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712293 5109 flags.go:64] FLAG: --manifest-url="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712301 5109 flags.go:64] FLAG: --manifest-url-header="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712315 5109 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712324 5109 flags.go:64] FLAG: --max-open-files="1000000" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712334 5109 flags.go:64] FLAG: --max-pods="110" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712343 5109 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712351 5109 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712359 5109 flags.go:64] FLAG: --memory-manager-policy="None" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712367 5109 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712376 5109 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712384 5109 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712393 5109 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712413 5109 flags.go:64] FLAG: --node-status-max-images="50" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712421 5109 flags.go:64] FLAG: 
--node-status-update-frequency="10s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712429 5109 flags.go:64] FLAG: --oom-score-adj="-999" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712440 5109 flags.go:64] FLAG: --pod-cidr="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712449 5109 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712461 5109 flags.go:64] FLAG: --pod-manifest-path="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712469 5109 flags.go:64] FLAG: --pod-max-pids="-1" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712477 5109 flags.go:64] FLAG: --pods-per-core="0" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712485 5109 flags.go:64] FLAG: --port="10250" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712494 5109 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712503 5109 flags.go:64] FLAG: --provider-id="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712511 5109 flags.go:64] FLAG: --qos-reserved="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712519 5109 flags.go:64] FLAG: --read-only-port="10255" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712527 5109 flags.go:64] FLAG: --register-node="true" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712535 5109 flags.go:64] FLAG: --register-schedulable="true" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712543 5109 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712557 5109 flags.go:64] FLAG: --registry-burst="10" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712565 5109 flags.go:64] FLAG: --registry-qps="5" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 
00:09:30.712575 5109 flags.go:64] FLAG: --reserved-cpus="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712587 5109 flags.go:64] FLAG: --reserved-memory="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712599 5109 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712610 5109 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712622 5109 flags.go:64] FLAG: --rotate-certificates="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712711 5109 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712728 5109 flags.go:64] FLAG: --runonce="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712739 5109 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712749 5109 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712759 5109 flags.go:64] FLAG: --seccomp-default="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712768 5109 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712779 5109 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712789 5109 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712799 5109 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712809 5109 flags.go:64] FLAG: --storage-driver-password="root" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712819 5109 flags.go:64] FLAG: --storage-driver-secure="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712829 5109 flags.go:64] FLAG: --storage-driver-table="stats" Feb 19 00:09:30 crc 
kubenswrapper[5109]: I0219 00:09:30.712838 5109 flags.go:64] FLAG: --storage-driver-user="root" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712851 5109 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712861 5109 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712871 5109 flags.go:64] FLAG: --system-cgroups="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712880 5109 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712898 5109 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712907 5109 flags.go:64] FLAG: --tls-cert-file="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712916 5109 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712931 5109 flags.go:64] FLAG: --tls-min-version="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712941 5109 flags.go:64] FLAG: --tls-private-key-file="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712951 5109 flags.go:64] FLAG: --topology-manager-policy="none" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712960 5109 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712969 5109 flags.go:64] FLAG: --topology-manager-scope="container" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712981 5109 flags.go:64] FLAG: --v="2" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.712993 5109 flags.go:64] FLAG: --version="false" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.713007 5109 flags.go:64] FLAG: --vmodule="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.713021 5109 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 19 00:09:30 crc 
kubenswrapper[5109]: I0219 00:09:30.713031 5109 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713305 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713320 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713330 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713340 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713348 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713357 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713365 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713374 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713382 5109 feature_gate.go:328] unrecognized feature gate: Example Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713391 5109 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713400 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713408 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713416 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713425 5109 feature_gate.go:328] 
unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713433 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713441 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713465 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713473 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713483 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713493 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713502 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713514 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713526 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713535 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713544 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713552 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713561 5109 feature_gate.go:328] unrecognized feature gate: Example2 Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713569 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713577 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713587 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713597 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713606 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713614 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713623 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713664 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713674 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:09:30 crc 
kubenswrapper[5109]: W0219 00:09:30.713683 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713692 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713701 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713709 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713718 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713728 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713736 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713746 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713755 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713764 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713773 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713782 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713790 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713815 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713824 5109 feature_gate.go:328] 
unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713832 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713840 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713850 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713858 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713866 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713880 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713892 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713902 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713912 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713922 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713932 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713941 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.713950 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714000 
5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714010 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714018 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714027 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714035 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714044 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714053 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714063 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714071 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714079 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714088 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714096 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714105 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714113 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714122 5109 feature_gate.go:328] unrecognized feature gate: 
MachineConfigNodes Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714130 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714138 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714148 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714181 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714190 5109 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714201 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.714209 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.715375 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.731105 5109 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.731161 5109 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731253 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 
00:09:30.731262 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731268 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731274 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731288 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731299 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731304 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731309 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731314 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731320 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731325 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731331 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731336 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731341 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731346 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731352 5109 feature_gate.go:328] 
unrecognized feature gate: AzureClusterHostedDNSInstall Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731358 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731364 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731369 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731375 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731381 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731386 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731392 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731397 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731403 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731408 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731414 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731419 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731424 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731430 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 19 
00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731436 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731445 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731451 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731456 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731461 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731467 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731472 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731477 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731483 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731488 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731496 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731503 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731510 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731515 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731521 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731527 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731532 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731538 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731543 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731549 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731554 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731560 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731565 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731570 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731575 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731581 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731586 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731591 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731596 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731602 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731609 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731615 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731622 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731629 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731661 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731668 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731673 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731678 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731684 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731689 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731694 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731700 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731705 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731711 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731716 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731722 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731727 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731733 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731738 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731743 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731749 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731754 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731759 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731765 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731770 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731776 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.731785 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.731990 5109 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732002 5109 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732008 5109 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732013 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732019 5109 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732025 5109 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732031 5109 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732037 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732042 5109 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732048 5109 feature_gate.go:328] unrecognized feature gate: Example2
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732054 5109 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732059 5109 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732065 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732070 5109 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732075 5109 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732081 5109 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732087 5109 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732093 5109 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732098 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732103 5109 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732112 5109 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732121 5109 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732127 5109 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732133 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732139 5109 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732149 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732155 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732161 5109 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732166 5109 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732172 5109 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732264 5109 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732270 5109 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732276 5109 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732282 5109 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732287 5109 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732293 5109 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732298 5109 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732303 5109 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732309 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732315 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732320 5109 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732326 5109 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732332 5109 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732346 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732351 5109 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732359 5109 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732365 5109 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732371 5109 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732376 5109 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732382 5109 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732394 5109 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732400 5109 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732405 5109 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732411 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732421 5109 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732427 5109 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732432 5109 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732437 5109 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732443 5109 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732449 5109 feature_gate.go:328] unrecognized feature gate: Example
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732454 5109 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732459 5109 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732464 5109 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732470 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732476 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732482 5109 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732487 5109 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732492 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732497 5109 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732503 5109 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732508 5109 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732514 5109 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732520 5109 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732525 5109 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732531 5109 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732536 5109 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732549 5109 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732555 5109 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732565 5109 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732570 5109 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732576 5109 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732581 5109 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732587 5109 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732592 5109 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732597 5109 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 19 00:09:30 crc kubenswrapper[5109]: W0219 00:09:30.732604 5109 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.732616 5109 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.734616 5109 server.go:962] "Client rotation is on, will bootstrap in background"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.738611 5109 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.742245 5109 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.742342 5109 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.743483 5109 server.go:1019] "Starting client certificate rotation"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.743595 5109 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.744522 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.770612 5109 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.772910 5109 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.773351 5109 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.789412 5109 log.go:25] "Validated CRI v1 runtime API"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.836963 5109 log.go:25] "Validated CRI v1 image API"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.839325 5109 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.843990 5109 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-19-00-02-59-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.844035 5109 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.858439 5109 manager.go:217] Machine: {Timestamp:2026-02-19 00:09:30.855813788 +0000 UTC m=+0.692053797 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:6cf93e6e-89e8-4c26-9599-93db5625187a BootID:e671bad5-2a36-4927-b785-4272497c90ae Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:3e:5e:40 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:3e:5e:40 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ad:36:57 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:dc:0d:f9 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:12:d1:74 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b8:8b:2e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:d2:24:c2:60:5e:31 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:86:5f:6e:81:d6:82 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.859271 5109 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.859428 5109 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.862373 5109 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.862408 5109 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.862588 5109 topology_manager.go:138] "Creating topology manager with none policy"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.862599 5109 container_manager_linux.go:306] "Creating device plugin manager"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.862622 5109 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.863568 5109 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.864440 5109 state_mem.go:36] "Initialized new in-memory state store"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.864594 5109 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.868266 5109 kubelet.go:491] "Attempting to sync node with API server"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.868290 5109 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.868306 5109 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.868321 5109 kubelet.go:397] "Adding apiserver pod source"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.868337 5109 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.871040 5109 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.871058 5109 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.874164 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.874303 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.876332 5109 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.876372 5109 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.880895 5109 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.881112 5109 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.881886 5109 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882779 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882810 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882822 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882833 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882844 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882855 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882866 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882877 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882890 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882908 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.882923 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.883336 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.884811 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.884853 5109 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.886649 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.196:6443: connect: connection refused
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.908513 5109 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.908599 5109 server.go:1295] "Started kubelet"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.908908 5109 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.908915 5109 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.909541 5109 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.909931 5109 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 19 00:09:30 crc systemd[1]: Started Kubernetes Kubelet.
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.911132 5109 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.911163 5109 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.912085 5109 volume_manager.go:295] "The desired_state_of_world populator starts"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.912109 5109 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.912234 5109 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.912261 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="200ms"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.912323 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.913893 5109 factory.go:55] Registering systemd factory
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.913959 5109 factory.go:223] Registration of the systemd container factory successfully
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219
00:09:30.914583 5109 factory.go:153] Registering CRI-O factory Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.914629 5109 factory.go:223] Registration of the crio container factory successfully Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.914756 5109 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.914786 5109 factory.go:103] Registering Raw factory Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.914810 5109 manager.go:1196] Started watching for new ooms in manager Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.915451 5109 server.go:317] "Adding debug handlers to kubelet server" Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.913989 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d46aae613bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,LastTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.915736 5109 manager.go:319] Starting recovery of all containers Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.916375 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.949320 5109 manager.go:324] Recovery completed Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.965488 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.967710 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.967765 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.967784 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.973065 5109 cpu_manager.go:222] "Starting CPU manager" policy="none" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.973089 5109 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.973114 5109 state_mem.go:36] "Initialized new in-memory state store" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978884 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978929 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" 
volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978940 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978948 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978963 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978971 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978980 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.978990 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979003 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979012 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979020 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979028 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979039 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979047 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979057 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979064 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979078 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979086 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979097 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979104 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979115 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979123 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979131 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979139 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979147 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979155 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979162 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979170 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979186 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979193 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979210 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979218 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979232 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979242 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979250 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979258 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979271 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979279 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" 
volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979290 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979298 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979316 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979324 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979335 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979342 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" 
volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979358 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979365 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979373 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979380 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979391 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979404 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" 
seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979412 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979419 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979434 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979442 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979449 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979456 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 
00:09:30.979474 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979484 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979492 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979500 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979510 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979517 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979525 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979534 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979542 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979550 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979564 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979571 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979588 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" 
volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979596 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979604 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979612 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979628 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979650 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979662 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" 
seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979670 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979681 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979689 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979697 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979705 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979712 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979721 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979728 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979736 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979750 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979757 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979766 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979774 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979781 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979789 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979798 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979806 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979819 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979828 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979836 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979843 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979851 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979858 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979866 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979875 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979892 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979899 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979907 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979916 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979926 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979934 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979951 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979959 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979970 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979978 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.979991 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980000 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980021 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980029 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980036 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980044 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980056 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980063 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980071 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980079 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980095 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980106 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980113 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980121 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980140 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980148 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980160 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980167 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980181 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980189 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980196 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980216 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980230 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980238 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980248 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980267 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980278 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980286 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980307 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980314 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980325 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980343 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980351 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980359 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980385 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980395 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980402 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980421 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980434 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980443 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980452 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980461 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980469 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980479 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980487 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980494 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980509 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980516 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980524 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980532 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980540 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980548 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980559 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980593 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980606 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980614 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980621 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980628 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980720 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980728 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980740 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980748 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980758 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980770 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980779 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980786 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980793 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980803 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980813 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980820 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980832 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980839 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980859 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980867 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980882 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980892 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980900 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980907 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980916 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980924 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980931 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980944 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980952 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980959 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980969 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980980 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.980995 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981002 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981009 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981016 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b"
volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981026 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981040 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981050 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981057 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981066 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981074 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" 
volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981081 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981088 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981101 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981109 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981118 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981125 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" 
seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981165 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981173 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981180 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981187 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981197 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981207 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981214 5109 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.981224 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982652 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982683 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982697 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982710 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982762 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982774 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982785 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982796 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982808 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982820 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982831 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" 
volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.982842 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.985342 5109 policy_none.go:49] "None policy: Start" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.985382 5109 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.985405 5109 state_mem.go:35] "Initializing new in-memory state store" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.987767 5109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988062 5109 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988098 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988113 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" 
seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988127 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988140 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988152 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988163 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988175 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988192 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988208 
5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988232 5109 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988245 5109 reconstruct.go:97] "Volume reconstruction finished" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.988253 5109 reconciler.go:26] "Reconciler: start to sync state" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.989876 5109 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.989915 5109 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.989938 5109 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 19 00:09:30 crc kubenswrapper[5109]: I0219 00:09:30.989947 5109 kubelet.go:2451] "Starting kubelet main sync loop"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.989988 5109 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 19 00:09:30 crc kubenswrapper[5109]: E0219 00:09:30.990880 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.012434 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.052748 5109 manager.go:341] "Starting Device Plugin manager"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.052985 5109 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053008 5109 server.go:85] "Starting device plugin registration server"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053473 5109 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053493 5109 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053644 5109 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053740 5109 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.053752 5109 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.059944 5109 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.059995 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.090104 5109 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.090281 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.091123 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.091174 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.091185 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.091944 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.092192 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.092266 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.092611 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.092662 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.092673 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093128 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093150 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093158 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093455 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093548 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.093584 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094172 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094205 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094225 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094223 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.094261 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095203 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095314 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095370 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095780 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095822 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.095837 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096428 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096470 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096598 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096689 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.096750 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097176 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097231 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097251 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097842 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097880 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.097896 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.098449 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.098513 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.099224 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.099276 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.099288 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.113211 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="400ms"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.121665 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.140013 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.154106 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.154992 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.155057 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.155077 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.155118 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.155822 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.196:6443: connect: connection refused" node="crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.163660 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.191023 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.191064 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.191572 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192771 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192815 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192845 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192911 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192936 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.192965 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\"
not found" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.192984 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193033 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193060 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193121 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193253 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc 
kubenswrapper[5109]: I0219 00:09:31.193292 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193312 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193327 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193350 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193367 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193364 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193382 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193397 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193410 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193426 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193445 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193734 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193870 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.193966 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.194171 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.194336 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" 
(UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.194861 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.199789 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294459 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294557 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294591 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294681 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294746 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294753 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294680 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294848 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294813 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294790 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294864 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294944 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294987 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.294992 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295025 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295073 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295117 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295117 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295182 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295140 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295200 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295254 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295267 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295285 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295320 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295347 5109 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295350 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295396 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295396 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295444 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.295451 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc 
kubenswrapper[5109]: I0219 00:09:31.295598 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.356066 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.357622 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.357711 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.357734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.357773 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.358493 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.196:6443: connect: connection refused" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.422777 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.441562 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: W0219 00:09:31.460362 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-7dbf31a79b8ead071e708610470a7052487baeab206ec7afc2d42dc963d39fe3 WatchSource:0}: Error finding container 7dbf31a79b8ead071e708610470a7052487baeab206ec7afc2d42dc963d39fe3: Status 404 returned error can't find the container with id 7dbf31a79b8ead071e708610470a7052487baeab206ec7afc2d42dc963d39fe3 Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.464139 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.467979 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:09:31 crc kubenswrapper[5109]: W0219 00:09:31.469129 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-3b24ef9fb312e40b6527b4bedfa06d16aaa91fd7ad6f3bc1b33cc6bb7fca7857 WatchSource:0}: Error finding container 3b24ef9fb312e40b6527b4bedfa06d16aaa91fd7ad6f3bc1b33cc6bb7fca7857: Status 404 returned error can't find the container with id 3b24ef9fb312e40b6527b4bedfa06d16aaa91fd7ad6f3bc1b33cc6bb7fca7857 Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.494603 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.501036 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:09:31 crc kubenswrapper[5109]: W0219 00:09:31.505112 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-8f81a22d7461a3a9d40a1daa0ffd114eb5fd222d4a7ae9a7a855a6beaccb1ce5 WatchSource:0}: Error finding container 8f81a22d7461a3a9d40a1daa0ffd114eb5fd222d4a7ae9a7a855a6beaccb1ce5: Status 404 returned error can't find the container with id 8f81a22d7461a3a9d40a1daa0ffd114eb5fd222d4a7ae9a7a855a6beaccb1ce5 Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.514010 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="800ms" Feb 19 00:09:31 crc kubenswrapper[5109]: W0219 00:09:31.538568 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-ecfa4c083363587656b7a38f1a579cc927a10d20d28822fc9a5dc33ea3bd5b9a WatchSource:0}: Error finding container ecfa4c083363587656b7a38f1a579cc927a10d20d28822fc9a5dc33ea3bd5b9a: Status 404 returned error can't find the container with id ecfa4c083363587656b7a38f1a579cc927a10d20d28822fc9a5dc33ea3bd5b9a Feb 19 00:09:31 crc kubenswrapper[5109]: W0219 00:09:31.540280 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-434c38553591c564d20662e2e43f39edce4a4d9d45453c283cdae77d93f3fb19 WatchSource:0}: Error finding container 434c38553591c564d20662e2e43f39edce4a4d9d45453c283cdae77d93f3fb19: Status 404 returned error can't find the container with id 
434c38553591c564d20662e2e43f39edce4a4d9d45453c283cdae77d93f3fb19 Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.759594 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.760600 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.760651 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.760665 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.760689 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.761087 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.196:6443: connect: connection refused" node="crc" Feb 19 00:09:31 crc kubenswrapper[5109]: E0219 00:09:31.882437 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:31 crc kubenswrapper[5109]: I0219 00:09:31.888065 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.196:6443: connect: connection refused Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.003474 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"434c38553591c564d20662e2e43f39edce4a4d9d45453c283cdae77d93f3fb19"}
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.006237 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"ecfa4c083363587656b7a38f1a579cc927a10d20d28822fc9a5dc33ea3bd5b9a"}
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.008710 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"8f81a22d7461a3a9d40a1daa0ffd114eb5fd222d4a7ae9a7a855a6beaccb1ce5"}
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.014671 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"3b24ef9fb312e40b6527b4bedfa06d16aaa91fd7ad6f3bc1b33cc6bb7fca7857"}
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.016041 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"7dbf31a79b8ead071e708610470a7052487baeab206ec7afc2d42dc963d39fe3"}
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.121942 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.155924 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.250083 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.196:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18957d46aae613bf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,LastTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.315487 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="1.6s"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.406048 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.562085 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.563548 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.563589 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.563599 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.563623 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.564100 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.196:6443: connect: connection refused" node="crc"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.819488 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:32 crc kubenswrapper[5109]: E0219 00:09:32.820549 5109 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 19 00:09:32 crc kubenswrapper[5109]: I0219 00:09:32.887913 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.196:6443: connect: connection refused
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.026300 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.026376 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.026398 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.028763 5109 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b" exitCode=0
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.028910 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.029163 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.030094 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.030137 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.030147 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.030363 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.031854 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445" exitCode=0
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.031988 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.032559 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.032707 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.032758 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.032780 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.033119 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.036536 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038430 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823" exitCode=0
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038553 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038835 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038878 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038941 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.038966 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.041089 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.041125 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.041139 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.041398 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.043724 5109 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b" exitCode=0
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.043777 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b"}
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.043798 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.044211 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.044243 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.044257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.044389 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:33 crc kubenswrapper[5109]: I0219 00:09:33.889232 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.196:6443: connect: connection refused
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.916839 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="3.2s"
Feb 19 00:09:33 crc kubenswrapper[5109]: E0219 00:09:33.919374 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.196:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.047907 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.048050 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.049676 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.049724 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.049737 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:34 crc kubenswrapper[5109]: E0219 00:09:34.049974 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.050948 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.050988 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.050998 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.051126 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.051624 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.051697 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.051706 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:34 crc kubenswrapper[5109]: E0219 00:09:34.051854 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.054225 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.054245 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.054253 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.054261 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.055500 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08" exitCode=0
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.055541 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.055702 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.056239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.056260 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.056268 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:34 crc kubenswrapper[5109]: E0219 00:09:34.056415 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.058371 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676"}
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.058491 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.059093 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.059120 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.059131 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:34 crc kubenswrapper[5109]: E0219 00:09:34.059262 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.164622 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.166441 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.166487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.166500 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:34 crc kubenswrapper[5109]: I0219 00:09:34.166526 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:34 crc kubenswrapper[5109]: E0219 00:09:34.167214 5109 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.196:6443: connect: connection refused" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.064061 5109 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e" exitCode=0
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.064179 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e"}
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.064395 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.065367 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.065414 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.065433 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:35 crc kubenswrapper[5109]: E0219 00:09:35.065769 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.070860 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"29aff849c549a07c658910126fc5216e83ea186c514923d1902e077ef942af20"}
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.071254 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.071305 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.071356 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.071503 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.071600 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.072454 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.072483 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.072496 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:35 crc kubenswrapper[5109]: E0219 00:09:35.072853 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073227 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073254 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073266 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:35 crc kubenswrapper[5109]: E0219 00:09:35.073425 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073920 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.073959 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:35 crc kubenswrapper[5109]: E0219 00:09:35.074159 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.074406 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.074430 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.074441 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:35 crc kubenswrapper[5109]: E0219 00:09:35.074789 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.889973 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:35 crc kubenswrapper[5109]: I0219 00:09:35.972976 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.078833 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd"}
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.078888 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802"}
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.078908 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d"}
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.078997 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.079071 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.079143 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.079867 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.079912 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.079932 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.080386 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.080465 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.080494 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:36 crc kubenswrapper[5109]: E0219 00:09:36.080387 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:36 crc kubenswrapper[5109]: E0219 00:09:36.081182 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:36 crc kubenswrapper[5109]: I0219 00:09:36.899142 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.087815 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105"}
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.087877 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09"}
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.087986 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.088052 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.088627 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.088889 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.088928 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.088949 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:37 crc kubenswrapper[5109]: E0219 00:09:37.089664 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.089717 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.089785 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.089815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:37 crc kubenswrapper[5109]: E0219 00:09:37.090191 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.273026 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.273265 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.274220 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.274283 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.274304 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:37 crc kubenswrapper[5109]: E0219 00:09:37.274940 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.367493 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.368624 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.368732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.368781 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.368813 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.462514 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:37 crc kubenswrapper[5109]: I0219 00:09:37.470914 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.090274 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.090522 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091407 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091467 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091492 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091782 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091832 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:38 crc kubenswrapper[5109]: I0219 00:09:38.091852 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:38 crc kubenswrapper[5109]: E0219 00:09:38.092238 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:38 crc kubenswrapper[5109]: E0219 00:09:38.092943 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.048712 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.049065 5109 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.049135 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.050479 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.050533 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.050561 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:39 crc kubenswrapper[5109]: E0219 00:09:39.051149 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.092520 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.093446 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.093503 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.093529 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:39 crc kubenswrapper[5109]: E0219 00:09:39.094043 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.253860 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.254170 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.255259 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.255315 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.255334 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:39 crc kubenswrapper[5109]: E0219 00:09:39.255887 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.705971 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:39 crc kubenswrapper[5109]: I0219 00:09:39.928683 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not 
ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.095457 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.096439 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.096517 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.096544 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:40 crc kubenswrapper[5109]: E0219 00:09:40.097199 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.475787 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.476243 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.477249 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.477321 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:40 crc kubenswrapper[5109]: I0219 00:09:40.477346 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:40 crc kubenswrapper[5109]: E0219 00:09:40.478062 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" 
not found" node="crc" Feb 19 00:09:41 crc kubenswrapper[5109]: E0219 00:09:41.060312 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:41 crc kubenswrapper[5109]: I0219 00:09:41.097614 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:41 crc kubenswrapper[5109]: I0219 00:09:41.098732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:41 crc kubenswrapper[5109]: I0219 00:09:41.098814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:41 crc kubenswrapper[5109]: I0219 00:09:41.098839 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:41 crc kubenswrapper[5109]: E0219 00:09:41.099377 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.706605 5109 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.706763 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.822457 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 19 00:09:42 crc 
kubenswrapper[5109]: I0219 00:09:42.822791 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.823975 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.824036 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:42 crc kubenswrapper[5109]: I0219 00:09:42.824061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:42 crc kubenswrapper[5109]: E0219 00:09:42.824754 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:44 crc kubenswrapper[5109]: I0219 00:09:44.869599 5109 trace.go:236] Trace[1610089452]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:34.867) (total time: 10001ms): Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1610089452]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:44.869) Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1610089452]: [10.001677714s] [10.001677714s] END Feb 19 00:09:44 crc kubenswrapper[5109]: E0219 00:09:44.869653 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:44 crc kubenswrapper[5109]: I0219 00:09:44.888574 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 19 00:09:44 crc kubenswrapper[5109]: I0219 00:09:44.919077 5109 trace.go:236] Trace[1040592410]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:34.918) (total time: 10000ms): Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1040592410]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (00:09:44.919) Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1040592410]: [10.000937114s] [10.000937114s] END Feb 19 00:09:44 crc kubenswrapper[5109]: E0219 00:09:44.919113 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:44 crc kubenswrapper[5109]: I0219 00:09:44.922808 5109 trace.go:236] Trace[1175579734]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:34.921) (total time: 10001ms): Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1175579734]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:44.922) Feb 19 00:09:44 crc kubenswrapper[5109]: Trace[1175579734]: [10.001574657s] [10.001574657s] END Feb 19 00:09:44 crc kubenswrapper[5109]: E0219 00:09:44.922850 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Feb 19 00:09:45 crc kubenswrapper[5109]: I0219 00:09:45.547359 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:45 crc kubenswrapper[5109]: I0219 00:09:45.547428 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:09:45 crc kubenswrapper[5109]: I0219 00:09:45.553133 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:45 crc kubenswrapper[5109]: I0219 00:09:45.553205 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 19 00:09:47 crc kubenswrapper[5109]: E0219 00:09:47.118116 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 19 00:09:48 crc kubenswrapper[5109]: E0219 00:09:48.578846 5109 reflector.go:200] "Failed to watch" err="failed to list 
*v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 19 00:09:48 crc kubenswrapper[5109]: E0219 00:09:48.733406 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.058260 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.058784 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.060062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.060142 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.060171 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:49 crc kubenswrapper[5109]: E0219 00:09:49.060855 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.066795 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.117215 5109 kubelet_node_status.go:413] "Setting node 
annotation to enable volume controller attach/detach" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.118153 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.118282 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:49 crc kubenswrapper[5109]: I0219 00:09:49.118308 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:49 crc kubenswrapper[5109]: E0219 00:09:49.119174 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.553154 5109 trace.go:236] Trace[833771240]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (19-Feb-2026 00:09:39.066) (total time: 11486ms): Feb 19 00:09:50 crc kubenswrapper[5109]: Trace[833771240]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 11486ms (00:09:50.553) Feb 19 00:09:50 crc kubenswrapper[5109]: Trace[833771240]: [11.486402743s] [11.486402743s] END Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.553194 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.553112 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46aae613bf default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,LastTimestamp:2026-02-19 00:09:30.908545983 +0000 UTC m=+0.744785992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.554477 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.559068 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.564370 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.569808 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.569933 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.576519 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46b3a23ee9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.055095529 +0000 UTC m=+0.891335528,LastTimestamp:2026-02-19 00:09:31.055095529 +0000 UTC m=+0.891335528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.585156 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.091144486 +0000 UTC m=+0.927384475,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.592768 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.091180387 +0000 UTC m=+0.927420376,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.601170 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.091190247 +0000 UTC m=+0.927430236,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.608182 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status 
is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.09264426 +0000 UTC m=+0.928884249,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.615846 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.09266873 +0000 UTC m=+0.928908719,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.622423 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC 
m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.092678421 +0000 UTC m=+0.928918410,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.627444 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.093140964 +0000 UTC m=+0.929380953,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.634813 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.093154575 +0000 UTC m=+0.929394564,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.641944 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.093162365 +0000 UTC m=+0.929402354,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.651776 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.094193125 +0000 UTC m=+0.930433134,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.659335 5109 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.094207445 +0000 UTC m=+0.930447474,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.665760 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.094214216 +0000 UTC m=+0.930454225,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.670211 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.094235356 +0000 UTC m=+0.930475355,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.673889 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.094252517 +0000 UTC m=+0.930492546,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.678846 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.094270307 +0000 UTC m=+0.930510336,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.681463 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.095805432 +0000 UTC m=+0.932045431,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.684823 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.095831643 +0000 UTC m=+0.932071642,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.690119 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6e150c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6e150c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96779086 +0000 UTC m=+0.804030859,LastTimestamp:2026-02-19 00:09:31.095843093 +0000 UTC m=+0.932083092,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.693588 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35884->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.693658 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": 
read tcp 192.168.126.11:35884->192.168.126.11:17697: read: connection reset by peer" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.693694 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35900->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.693857 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35900->192.168.126.11:17697: read: connection reset by peer" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.694375 5109 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.694444 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.697989 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6d5bcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6d5bcf default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.967743439 +0000 UTC m=+0.803983438,LastTimestamp:2026-02-19 00:09:31.096451071 +0000 UTC m=+0.932691080,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.703826 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18957d46ae6dd0ce\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18957d46ae6dd0ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:30.96777339 +0000 UTC m=+0.804013399,LastTimestamp:2026-02-19 00:09:31.096479962 +0000 UTC m=+0.932719961,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.710301 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d46cc436fd1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.468312529 +0000 UTC m=+1.304552528,LastTimestamp:2026-02-19 00:09:31.468312529 +0000 UTC m=+1.304552528,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.714580 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d46cd035634 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.480888884 +0000 UTC m=+1.317128904,LastTimestamp:2026-02-19 00:09:31.480888884 +0000 UTC m=+1.317128904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.722509 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d46cee09ee2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.512168162 +0000 UTC m=+1.348408161,LastTimestamp:2026-02-19 00:09:31.512168162 +0000 UTC m=+1.348408161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.733089 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d46d0afad11 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.542514961 +0000 UTC m=+1.378754990,LastTimestamp:2026-02-19 00:09:31.542514961 +0000 
UTC m=+1.378754990,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.740358 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d46d0bfff5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:31.543584604 +0000 UTC m=+1.379824643,LastTimestamp:2026-02-19 00:09:31.543584604 +0000 UTC m=+1.379824643,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.748162 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d46f69096ff openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.178011903 +0000 UTC m=+2.014251902,LastTimestamp:2026-02-19 00:09:32.178011903 +0000 UTC m=+2.014251902,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.753926 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d46f690bc93 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.178021523 +0000 UTC m=+2.014261552,LastTimestamp:2026-02-19 00:09:32.178021523 +0000 UTC m=+2.014261552,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.759932 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d46f69260fb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.178129147 +0000 UTC m=+2.014369146,LastTimestamp:2026-02-19 00:09:32.178129147 +0000 UTC m=+2.014369146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.764290 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d46f6a736da openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.179494618 +0000 UTC m=+2.015734647,LastTimestamp:2026-02-19 00:09:32.179494618 +0000 UTC m=+2.015734647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.771542 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" 
event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d46f6a7bc0f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.179528719 +0000 UTC m=+2.015768728,LastTimestamp:2026-02-19 00:09:32.179528719 +0000 UTC m=+2.015768728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.776821 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.777055 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.778467 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.778513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.778528 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.778913 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.779505 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d46f79f0b6a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.195736426 +0000 UTC m=+2.031976425,LastTimestamp:2026-02-19 00:09:32.195736426 +0000 UTC m=+2.031976425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.782250 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.784501 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d46f7b0c229 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.196897321 +0000 UTC m=+2.033137320,LastTimestamp:2026-02-19 00:09:32.196897321 +0000 UTC m=+2.033137320,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.785112 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.788861 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d46f7b4be81 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.197158529 +0000 UTC m=+2.033398518,LastTimestamp:2026-02-19 00:09:32.197158529 +0000 UTC m=+2.033398518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.792563 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d46f7b8d2cb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.197425867 +0000 UTC m=+2.033665866,LastTimestamp:2026-02-19 00:09:32.197425867 +0000 UTC m=+2.033665866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.800583 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d46f7c23270 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.198040176 +0000 UTC m=+2.034280175,LastTimestamp:2026-02-19 00:09:32.198040176 +0000 UTC m=+2.034280175,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.804417 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d46f7c2a3a5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.198069157 +0000 UTC m=+2.034309146,LastTimestamp:2026-02-19 00:09:32.198069157 +0000 UTC m=+2.034309146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.809410 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d47099eb453 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.497704019 +0000 UTC m=+2.333944038,LastTimestamp:2026-02-19 00:09:32.497704019 +0000 UTC m=+2.333944038,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.813379 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d470a67e970 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.510890352 +0000 UTC m=+2.347130391,LastTimestamp:2026-02-19 00:09:32.510890352 +0000 UTC m=+2.347130391,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.817890 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d470a7aa70c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.51211854 +0000 UTC m=+2.348358529,LastTimestamp:2026-02-19 00:09:32.51211854 +0000 UTC m=+2.348358529,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.823661 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d47265b4130 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.979822896 +0000 UTC m=+2.816062885,LastTimestamp:2026-02-19 00:09:32.979822896 +0000 UTC m=+2.816062885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.829291 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d472740aacc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container 
kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.994857676 +0000 UTC m=+2.831097685,LastTimestamp:2026-02-19 00:09:32.994857676 +0000 UTC m=+2.831097685,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.834381 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d47275130a7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:32.995940519 +0000 UTC m=+2.832180508,LastTimestamp:2026-02-19 00:09:32.995940519 +0000 UTC m=+2.832180508,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.841987 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d47296e9e9d openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.031423645 +0000 UTC m=+2.867663634,LastTimestamp:2026-02-19 00:09:33.031423645 +0000 UTC m=+2.867663634,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.845504 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4729ba5ba5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.036387237 +0000 UTC m=+2.872627226,LastTimestamp:2026-02-19 00:09:33.036387237 +0000 UTC m=+2.872627226,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.849305 5109 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d472a252a7a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.043387002 +0000 UTC m=+2.879626991,LastTimestamp:2026-02-19 00:09:33.043387002 +0000 UTC m=+2.879626991,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.854282 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d472a3de839 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.045008441 +0000 
UTC m=+2.881248430,LastTimestamp:2026-02-19 00:09:33.045008441 +0000 UTC m=+2.881248430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.859055 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d4738842642 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.284492866 +0000 UTC m=+3.120732855,LastTimestamp:2026-02-19 00:09:33.284492866 +0000 UTC m=+3.120732855,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.864072 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d473893fc28 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.285530664 +0000 UTC m=+3.121770653,LastTimestamp:2026-02-19 00:09:33.285530664 +0000 UTC m=+3.121770653,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.871334 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d47389edddb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.286243803 +0000 UTC m=+3.122483792,LastTimestamp:2026-02-19 00:09:33.286243803 +0000 UTC m=+3.122483792,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.878063 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4739de5f13 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.307182867 +0000 UTC m=+3.143422866,LastTimestamp:2026-02-19 00:09:33.307182867 +0000 UTC m=+3.143422866,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.884198 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d473a3308a1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.312731297 +0000 UTC m=+3.148971286,LastTimestamp:2026-02-19 00:09:33.312731297 +0000 UTC m=+3.148971286,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.888559 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d473a3c19e7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.313325543 +0000 UTC m=+3.149565532,LastTimestamp:2026-02-19 00:09:33.313325543 +0000 UTC m=+3.149565532,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: I0219 00:09:50.892441 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.892721 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d473a3d63f3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.313410035 +0000 UTC m=+3.149650024,LastTimestamp:2026-02-19 00:09:33.313410035 +0000 UTC 
m=+3.149650024,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.894717 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d473a459708 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.3139474 +0000 UTC m=+3.150187389,LastTimestamp:2026-02-19 00:09:33.3139474 +0000 UTC m=+3.150187389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.897609 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d473a4a3644 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already 
present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.314250308 +0000 UTC m=+3.150490297,LastTimestamp:2026-02-19 00:09:33.314250308 +0000 UTC m=+3.150490297,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.899756 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d473a4f9011 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.314600977 +0000 UTC m=+3.150840966,LastTimestamp:2026-02-19 00:09:33.314600977 +0000 UTC m=+3.150840966,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.900146 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.904206 5109 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18957d473c0e3bd6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.343874006 +0000 UTC m=+3.180113995,LastTimestamp:2026-02-19 00:09:33.343874006 +0000 UTC m=+3.180113995,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.905491 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d473c0efce4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.343923428 +0000 UTC m=+3.180163417,LastTimestamp:2026-02-19 00:09:33.343923428 +0000 UTC m=+3.180163417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc 
kubenswrapper[5109]: E0219 00:09:50.912318 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d47466c96ad openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.517829805 +0000 UTC m=+3.354069804,LastTimestamp:2026-02-19 00:09:33.517829805 +0000 UTC m=+3.354069804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.913405 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4746854d5f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.519449439 +0000 UTC m=+3.355689438,LastTimestamp:2026-02-19 00:09:33.519449439 +0000 UTC m=+3.355689438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.916239 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d47471a2c2a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.529205802 +0000 UTC m=+3.365445801,LastTimestamp:2026-02-19 00:09:33.529205802 +0000 UTC m=+3.365445801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.919906 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d47471a560a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.529216522 +0000 UTC m=+3.365456511,LastTimestamp:2026-02-19 
00:09:33.529216522 +0000 UTC m=+3.365456511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.924526 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d474727a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.530088325 +0000 UTC m=+3.366328304,LastTimestamp:2026-02-19 00:09:33.530088325 +0000 UTC m=+3.366328304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.928106 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d47472822e8 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.530120936 +0000 UTC m=+3.366360925,LastTimestamp:2026-02-19 00:09:33.530120936 +0000 UTC m=+3.366360925,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.931929 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4753f95f10 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.745159952 +0000 UTC m=+3.581399941,LastTimestamp:2026-02-19 00:09:33.745159952 +0000 UTC m=+3.581399941,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.935671 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d47543abb1e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.749443358 +0000 UTC m=+3.585683337,LastTimestamp:2026-02-19 00:09:33.749443358 +0000 UTC m=+3.585683337,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.941426 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4754a6539c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.756494748 +0000 UTC m=+3.592734737,LastTimestamp:2026-02-19 00:09:33.756494748 +0000 UTC m=+3.592734737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 
00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.947684 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4754b46e73 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.757419123 +0000 UTC m=+3.593659112,LastTimestamp:2026-02-19 00:09:33.757419123 +0000 UTC m=+3.593659112,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.956880 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18957d4754f09a93 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 
00:09:33.761362579 +0000 UTC m=+3.597602558,LastTimestamp:2026-02-19 00:09:33.761362579 +0000 UTC m=+3.597602558,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.963372 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4760e0f126 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.961662758 +0000 UTC m=+3.797902747,LastTimestamp:2026-02-19 00:09:33.961662758 +0000 UTC m=+3.797902747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.970082 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4761d5c93c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container 
kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.97770886 +0000 UTC m=+3.813948869,LastTimestamp:2026-02-19 00:09:33.97770886 +0000 UTC m=+3.813948869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.979963 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4761e2bdcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.978557903 +0000 UTC m=+3.814797882,LastTimestamp:2026-02-19 00:09:33.978557903 +0000 UTC m=+3.814797882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.985946 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47669435df openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.057297375 +0000 UTC m=+3.893537364,LastTimestamp:2026-02-19 00:09:34.057297375 +0000 UTC m=+3.893537364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.992504 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476f0be702 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.199359234 +0000 UTC m=+4.035599243,LastTimestamp:2026-02-19 00:09:34.199359234 +0000 UTC m=+4.035599243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:50 crc kubenswrapper[5109]: E0219 00:09:50.999323 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476fdbfd82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.212996482 +0000 UTC m=+4.049236461,LastTimestamp:2026-02-19 00:09:34.212996482 +0000 UTC m=+4.049236461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.004125 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d477209a749 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.249543497 +0000 UTC m=+4.085783506,LastTimestamp:2026-02-19 00:09:34.249543497 +0000 UTC m=+4.085783506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.009258 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d4772a6951a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.259827994 +0000 UTC m=+4.096067983,LastTimestamp:2026-02-19 00:09:34.259827994 +0000 UTC m=+4.096067983,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.013800 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47a2d64d3d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.068261693 +0000 UTC m=+4.904501712,LastTimestamp:2026-02-19 00:09:35.068261693 +0000 UTC m=+4.904501712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.020078 5109 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47b19ca67f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.316141695 +0000 UTC m=+5.152381694,LastTimestamp:2026-02-19 00:09:35.316141695 +0000 UTC m=+5.152381694,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.025574 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47b2349735 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.326099253 +0000 UTC m=+5.162339242,LastTimestamp:2026-02-19 00:09:35.326099253 +0000 UTC m=+5.162339242,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.030476 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group 
\"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47b244beb5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.327157941 +0000 UTC m=+5.163397940,LastTimestamp:2026-02-19 00:09:35.327157941 +0000 UTC m=+5.163397940,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.040246 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47bf3ca72c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.544731436 +0000 UTC m=+5.380971435,LastTimestamp:2026-02-19 00:09:35.544731436 +0000 UTC m=+5.380971435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.046778 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47c04ed026 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.56269879 +0000 UTC m=+5.398938819,LastTimestamp:2026-02-19 00:09:35.56269879 +0000 UTC m=+5.398938819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.052652 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47c06556e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.56417508 +0000 UTC m=+5.400415109,LastTimestamp:2026-02-19 00:09:35.56417508 +0000 UTC m=+5.400415109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.056808 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47ceb5ec83 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.804337283 +0000 UTC m=+5.640577272,LastTimestamp:2026-02-19 00:09:35.804337283 +0000 UTC m=+5.640577272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.060590 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.063045 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47cfe0aa0b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.823915531 +0000 UTC m=+5.660155510,LastTimestamp:2026-02-19 00:09:35.823915531 +0000 UTC m=+5.660155510,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc 
kubenswrapper[5109]: E0219 00:09:51.067809 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47cfed4597 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:35.824741783 +0000 UTC m=+5.660981812,LastTimestamp:2026-02-19 00:09:35.824741783 +0000 UTC m=+5.660981812,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.073332 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47df821e4d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:36.086154829 +0000 UTC m=+5.922394858,LastTimestamp:2026-02-19 00:09:36.086154829 +0000 UTC m=+5.922394858,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.079317 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47e099db85 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:36.104487813 +0000 UTC m=+5.940727842,LastTimestamp:2026-02-19 00:09:36.104487813 +0000 UTC m=+5.940727842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.083194 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47e0b8780f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:36.106493967 +0000 UTC m=+5.942733986,LastTimestamp:2026-02-19 00:09:36.106493967 +0000 UTC 
m=+5.942733986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.086078 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47ed711604 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:36.31991962 +0000 UTC m=+6.156159649,LastTimestamp:2026-02-19 00:09:36.31991962 +0000 UTC m=+6.156159649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.089758 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18957d47ee66c044 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:36.336019524 +0000 UTC m=+6.172259553,LastTimestamp:2026-02-19 00:09:36.336019524 +0000 UTC m=+6.172259553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.095316 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-controller-manager-crc.18957d496a200296 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Feb 19 00:09:51 crc kubenswrapper[5109]: body: Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:42.706725526 +0000 UTC m=+12.542965545,LastTimestamp:2026-02-19 00:09:42.706725526 +0000 UTC m=+12.542965545,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:51 crc kubenswrapper[5109]: > Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.102086 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18957d496a21b03f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:42.706835519 +0000 UTC m=+12.543075538,LastTimestamp:2026-02-19 00:09:42.706835519 +0000 UTC m=+12.543075538,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.110858 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4a137157ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 19 00:09:51 crc kubenswrapper[5109]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:51 crc kubenswrapper[5109]: Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:45.54740523 +0000 UTC m=+15.383645239,LastTimestamp:2026-02-19 00:09:45.54740523 +0000 UTC m=+15.383645239,Count:1,Type:Warning,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:51 crc kubenswrapper[5109]: > Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.115515 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4a137202b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:45.547449012 +0000 UTC m=+15.383689011,LastTimestamp:2026-02-19 00:09:45.547449012 +0000 UTC m=+15.383689011,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.122787 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124193 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="29aff849c549a07c658910126fc5216e83ea186c514923d1902e077ef942af20" exitCode=255 Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124232 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"29aff849c549a07c658910126fc5216e83ea186c514923d1902e077ef942af20"} Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124389 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124417 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124888 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124933 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.124969 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.125004 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.125017 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.125360 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.125522 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.125669 5109 
scope.go:117] "RemoveContainer" containerID="29aff849c549a07c658910126fc5216e83ea186c514923d1902e077ef942af20" Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.126149 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4a137157ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4a137157ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 19 00:09:51 crc kubenswrapper[5109]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 19 00:09:51 crc kubenswrapper[5109]: Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:45.54740523 +0000 UTC m=+15.383645239,LastTimestamp:2026-02-19 00:09:45.553181655 +0000 UTC m=+15.389421654,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 19 00:09:51 crc kubenswrapper[5109]: > Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.132976 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4a137202b4\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4a137202b4 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:45.547449012 +0000 UTC m=+15.383689011,LastTimestamp:2026-02-19 00:09:45.553231037 +0000 UTC m=+15.389471036,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.139910 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4b462e6f59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:35884->192.168.126.11:17697: read: connection reset by peer
Feb 19 00:09:51 crc kubenswrapper[5109]: body:
Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.693625689 +0000 UTC m=+20.529865678,LastTimestamp:2026-02-19 00:09:50.693625689 +0000 UTC m=+20.529865678,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 19 00:09:51 crc kubenswrapper[5109]: >
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.147690 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4b462f3bba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35884->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.69367801 +0000 UTC m=+20.529917999,LastTimestamp:2026-02-19 00:09:50.69367801 +0000 UTC m=+20.529917999,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.154699 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4b46308eaf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:35900->192.168.126.11:17697: read: connection reset by peer
Feb 19 00:09:51 crc kubenswrapper[5109]: body:
Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.693764783 +0000 UTC m=+20.530004812,LastTimestamp:2026-02-19 00:09:50.693764783 +0000 UTC m=+20.530004812,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 19 00:09:51 crc kubenswrapper[5109]: >
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.161702 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4b4632ae4b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35900->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.693903947 +0000 UTC m=+20.530143996,LastTimestamp:2026-02-19 00:09:50.693903947 +0000 UTC m=+20.530143996,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.168728 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 19 00:09:51 crc kubenswrapper[5109]: &Event{ObjectMeta:{kube-apiserver-crc.18957d4b463a8b66 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Feb 19 00:09:51 crc kubenswrapper[5109]: body:
Feb 19 00:09:51 crc kubenswrapper[5109]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.694419302 +0000 UTC m=+20.530659331,LastTimestamp:2026-02-19 00:09:50.694419302 +0000 UTC m=+20.530659331,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 19 00:09:51 crc kubenswrapper[5109]: >
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.173930 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4b463b69b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:50.694476213 +0000 UTC m=+20.530716242,LastTimestamp:2026-02-19 00:09:50.694476213 +0000 UTC m=+20.530716242,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.179880 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4761e2bdcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4761e2bdcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.978557903 +0000 UTC m=+3.814797882,LastTimestamp:2026-02-19 00:09:51.126784176 +0000 UTC m=+20.963024165,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.351260 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d476f0be702\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476f0be702 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.199359234 +0000 UTC m=+4.035599243,LastTimestamp:2026-02-19 00:09:51.342339181 +0000 UTC m=+21.178579180,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: E0219 00:09:51.363166 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d476fdbfd82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476fdbfd82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.212996482 +0000 UTC m=+4.049236461,LastTimestamp:2026-02-19 00:09:51.356374901 +0000 UTC m=+21.192614900,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:51 crc kubenswrapper[5109]: I0219 00:09:51.894334 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.127943 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.129726 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.130037 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897"}
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.130149 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.130642 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.130666 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.130674 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:52 crc kubenswrapper[5109]: E0219 00:09:52.130906 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.131491 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.131506 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.131513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:52 crc kubenswrapper[5109]: E0219 00:09:52.131716 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.860054 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.860344 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.861384 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.861433 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.861445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:52 crc kubenswrapper[5109]: E0219 00:09:52.861881 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.873952 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 19 00:09:52 crc kubenswrapper[5109]: I0219 00:09:52.891529 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.133956 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.134581 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.136321 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897" exitCode=255
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.136365 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897"}
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.136445 5109 scope.go:117] "RemoveContainer" containerID="29aff849c549a07c658910126fc5216e83ea186c514923d1902e077ef942af20"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.136535 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.136693 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137206 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137259 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137270 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137396 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137436 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.137453 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:53 crc kubenswrapper[5109]: E0219 00:09:53.137822 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:53 crc kubenswrapper[5109]: E0219 00:09:53.138248 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.138657 5109 scope.go:117] "RemoveContainer" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897"
Feb 19 00:09:53 crc kubenswrapper[5109]: E0219 00:09:53.138961 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:09:53 crc kubenswrapper[5109]: E0219 00:09:53.153781 5109 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:53 crc kubenswrapper[5109]: E0219 00:09:53.524960 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 19 00:09:53 crc kubenswrapper[5109]: I0219 00:09:53.892497 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:54 crc kubenswrapper[5109]: I0219 00:09:54.141266 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 19 00:09:54 crc kubenswrapper[5109]: I0219 00:09:54.892670 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:55 crc kubenswrapper[5109]: I0219 00:09:55.895101 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.116307 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.116565 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.117360 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.117416 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.117435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.117909 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.118282 5109 scope.go:117] "RemoveContainer" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.118556 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.124310 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4bd7ee2875\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:09:56.118511945 +0000 UTC m=+25.954751944,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.253176 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.601924 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.895460 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.970124 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.971581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.971746 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.971769 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:09:56 crc kubenswrapper[5109]: I0219 00:09:56.971825 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:09:56 crc kubenswrapper[5109]: E0219 00:09:56.986834 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 19 00:09:57 crc kubenswrapper[5109]: I0219 00:09:57.896939 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:58 crc kubenswrapper[5109]: I0219 00:09:58.892529 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:09:59 crc kubenswrapper[5109]: E0219 00:09:59.471091 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 19 00:09:59 crc kubenswrapper[5109]: I0219 00:09:59.895351 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:00 crc kubenswrapper[5109]: E0219 00:10:00.533068 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 19 00:10:00 crc kubenswrapper[5109]: I0219 00:10:00.896351 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:01 crc kubenswrapper[5109]: E0219 00:10:01.060967 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:10:01 crc kubenswrapper[5109]: I0219 00:10:01.891732 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.130471 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.130874 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.131885 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.131946 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.131968 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:02 crc kubenswrapper[5109]: E0219 00:10:02.132565 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.133079 5109 scope.go:117] "RemoveContainer" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897"
Feb 19 00:10:02 crc kubenswrapper[5109]: E0219 00:10:02.133484 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:02 crc kubenswrapper[5109]: E0219 00:10:02.138210 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4bd7ee2875\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:10:02.133414106 +0000 UTC m=+31.969654125,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:10:02 crc kubenswrapper[5109]: E0219 00:10:02.426806 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 19 00:10:02 crc kubenswrapper[5109]: I0219 00:10:02.895894 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.895951 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.986965 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.988284 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.988373 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.988401 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:03 crc kubenswrapper[5109]: I0219 00:10:03.988448 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:10:04 crc kubenswrapper[5109]: E0219 00:10:04.003795 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 19 00:10:04 crc kubenswrapper[5109]: I0219 00:10:04.893826 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:05 crc kubenswrapper[5109]: I0219 00:10:05.895296 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:06 crc kubenswrapper[5109]: I0219 00:10:06.895341 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:07 crc kubenswrapper[5109]: E0219 00:10:07.543617 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 19 00:10:07 crc kubenswrapper[5109]: I0219 00:10:07.888521 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:08 crc kubenswrapper[5109]: I0219 00:10:08.895554 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:09 crc kubenswrapper[5109]: I0219 00:10:09.895117 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:10 crc kubenswrapper[5109]: I0219 00:10:10.894474 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.004250 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.005316 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.005389 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.005414 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.005456 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:10:11 crc kubenswrapper[5109]: E0219 00:10:11.019973 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 19 00:10:11 crc kubenswrapper[5109]: E0219 00:10:11.061851 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:10:11 crc kubenswrapper[5109]: I0219 00:10:11.895967 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:12 crc kubenswrapper[5109]: E0219 00:10:12.536441 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 19 00:10:12 crc kubenswrapper[5109]: I0219 00:10:12.895133 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:13 crc kubenswrapper[5109]: I0219 00:10:13.896142 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:13 crc kubenswrapper[5109]: I0219
00:10:13.990607 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:13 crc kubenswrapper[5109]: I0219 00:10:13.992165 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:13 crc kubenswrapper[5109]: I0219 00:10:13.992261 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:13 crc kubenswrapper[5109]: I0219 00:10:13.992283 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:13 crc kubenswrapper[5109]: E0219 00:10:13.992931 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:13 crc kubenswrapper[5109]: I0219 00:10:13.993353 5109 scope.go:117] "RemoveContainer" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897" Feb 19 00:10:14 crc kubenswrapper[5109]: E0219 00:10:14.004108 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4761e2bdcf\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4761e2bdcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.978557903 
+0000 UTC m=+3.814797882,LastTimestamp:2026-02-19 00:10:13.995314204 +0000 UTC m=+43.831554223,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:14 crc kubenswrapper[5109]: E0219 00:10:14.207044 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d476f0be702\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476f0be702 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.199359234 +0000 UTC m=+4.035599243,LastTimestamp:2026-02-19 00:10:14.1990337 +0000 UTC m=+44.035273739,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:14 crc kubenswrapper[5109]: E0219 00:10:14.219047 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d476fdbfd82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d476fdbfd82 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:34.212996482 +0000 UTC m=+4.049236461,LastTimestamp:2026-02-19 00:10:14.211334972 +0000 UTC m=+44.047574991,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:14 crc kubenswrapper[5109]: E0219 00:10:14.554138 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:10:14 crc kubenswrapper[5109]: I0219 00:10:14.893672 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.201752 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.202655 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.205198 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd" 
exitCode=255 Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.205255 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd"} Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.205286 5109 scope.go:117] "RemoveContainer" containerID="7d443b7f7893cac33624de5813fea28e213043cb614a2c1a00d4e7412a39e897" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.205591 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.206423 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.206475 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.206494 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:15 crc kubenswrapper[5109]: E0219 00:10:15.207181 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.207568 5109 scope.go:117] "RemoveContainer" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd" Feb 19 00:10:15 crc kubenswrapper[5109]: E0219 00:10:15.207909 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:15 crc kubenswrapper[5109]: E0219 00:10:15.216879 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4bd7ee2875\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:10:15.207856513 +0000 UTC m=+45.044096532,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:15 crc kubenswrapper[5109]: I0219 00:10:15.895282 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.116005 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.211103 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 19 00:10:16 crc 
kubenswrapper[5109]: I0219 00:10:16.214910 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.215756 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.215815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.215834 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:16 crc kubenswrapper[5109]: E0219 00:10:16.216472 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.216931 5109 scope.go:117] "RemoveContainer" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd" Feb 19 00:10:16 crc kubenswrapper[5109]: E0219 00:10:16.217254 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:16 crc kubenswrapper[5109]: E0219 00:10:16.225790 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4bd7ee2875\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:10:16.21719888 +0000 UTC m=+46.053438899,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:16 crc kubenswrapper[5109]: I0219 00:10:16.894730 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:17 crc kubenswrapper[5109]: E0219 00:10:17.536868 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 19 00:10:17 crc kubenswrapper[5109]: E0219 00:10:17.809573 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 19 00:10:17 crc kubenswrapper[5109]: I0219 00:10:17.892664 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: 
csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.020318 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.021381 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.021419 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.021430 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.021452 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:10:18 crc kubenswrapper[5109]: E0219 00:10:18.029331 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:10:18 crc kubenswrapper[5109]: I0219 00:10:18.895465 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:19 crc kubenswrapper[5109]: I0219 00:10:19.895094 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:20 crc kubenswrapper[5109]: I0219 00:10:20.898038 5109 csi_plugin.go:988] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:21 crc kubenswrapper[5109]: E0219 00:10:21.063178 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:10:21 crc kubenswrapper[5109]: E0219 00:10:21.562716 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:10:21 crc kubenswrapper[5109]: I0219 00:10:21.895193 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.131175 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.131584 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.133913 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.133989 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.134016 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:22 crc kubenswrapper[5109]: E0219 00:10:22.134865 5109 kubelet.go:3336] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.135373 5109 scope.go:117] "RemoveContainer" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd" Feb 19 00:10:22 crc kubenswrapper[5109]: E0219 00:10:22.136010 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:22 crc kubenswrapper[5109]: E0219 00:10:22.144442 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4bd7ee2875\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4bd7ee2875 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:53.138886773 +0000 UTC m=+22.975126762,LastTimestamp:2026-02-19 00:10:22.135942927 +0000 UTC m=+51.972182956,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:10:22 crc kubenswrapper[5109]: I0219 00:10:22.895687 
5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:23 crc kubenswrapper[5109]: E0219 00:10:23.064515 5109 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 19 00:10:23 crc kubenswrapper[5109]: I0219 00:10:23.894777 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:24 crc kubenswrapper[5109]: I0219 00:10:24.895206 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.030435 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.031830 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.031894 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.031913 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.031951 5109 kubelet_node_status.go:78] "Attempting to 
register node" node="crc" Feb 19 00:10:25 crc kubenswrapper[5109]: E0219 00:10:25.047491 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:10:25 crc kubenswrapper[5109]: I0219 00:10:25.896074 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.089088 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.089496 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.090918 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.091055 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.091099 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:26 crc kubenswrapper[5109]: E0219 00:10:26.091826 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:26 crc kubenswrapper[5109]: I0219 00:10:26.895393 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Feb 19 00:10:27 crc kubenswrapper[5109]: I0219 00:10:27.894764 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:28 crc kubenswrapper[5109]: E0219 00:10:28.570905 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:10:28 crc kubenswrapper[5109]: I0219 00:10:28.895597 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:29 crc kubenswrapper[5109]: I0219 00:10:29.896834 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:30 crc kubenswrapper[5109]: I0219 00:10:30.895385 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:31 crc kubenswrapper[5109]: E0219 00:10:31.063737 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 19 00:10:31 crc kubenswrapper[5109]: I0219 00:10:31.893318 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.049548 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.051718 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.051829 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.051859 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.051906 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 19 00:10:32 crc kubenswrapper[5109]: E0219 00:10:32.067384 5109 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 19 00:10:32 crc kubenswrapper[5109]: I0219 00:10:32.896983 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:33 crc kubenswrapper[5109]: I0219 00:10:33.893756 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:34 crc kubenswrapper[5109]: I0219 00:10:34.894559 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:35 crc kubenswrapper[5109]: E0219 00:10:35.579254 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.894522 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.991249 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.992546 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.992864 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.993052 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:35 crc kubenswrapper[5109]: E0219 00:10:35.994198 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 19 00:10:35 crc kubenswrapper[5109]: I0219 00:10:35.995009 5109 scope.go:117] "RemoveContainer" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd" Feb 19 00:10:36 crc kubenswrapper[5109]: E0219 00:10:36.005362 5109 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18957d4761e2bdcf\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18957d4761e2bdcf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:09:33.978557903 +0000 UTC m=+3.814797882,LastTimestamp:2026-02-19 00:10:35.996975183 +0000 UTC m=+65.833215183,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.271119 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.273451 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"}
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.273696 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.274230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.274295 
5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.274314 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:36 crc kubenswrapper[5109]: E0219 00:10:36.274844 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:36 crc kubenswrapper[5109]: I0219 00:10:36.894577 5109 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.067244 5109 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-fw5tx"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.077601 5109 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-fw5tx"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.130624 5109 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.277230 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.277822 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.278948 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" 
containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99" exitCode=255
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.279000 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"}
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.279030 5109 scope.go:117] "RemoveContainer" containerID="2ce1655f2c27588feb282812404b26276dbcbb8418da7fa5422976183c962afd"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.279391 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.280038 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.280073 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.280086 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:37 crc kubenswrapper[5109]: E0219 00:10:37.280475 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.280701 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"
Feb 19 00:10:37 crc kubenswrapper[5109]: E0219 00:10:37.280865 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:37 crc kubenswrapper[5109]: I0219 00:10:37.743321 5109 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 19 00:10:38 crc kubenswrapper[5109]: I0219 00:10:38.078807 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-21 00:05:37 +0000 UTC" deadline="2026-03-16 03:27:18.34540328 +0000 UTC"
Feb 19 00:10:38 crc kubenswrapper[5109]: I0219 00:10:38.078894 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="603h16m40.266517191s"
Feb 19 00:10:38 crc kubenswrapper[5109]: I0219 00:10:38.283084 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.067531 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.068716 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.068788 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.068816 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.069004 5109 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.080346 5109 
kubelet_node_status.go:127] "Node was previously registered" node="crc"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.080615 5109 kubelet_node_status.go:81] "Successfully registered node" node="crc"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.080654 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.083902 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.083967 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.083997 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.084032 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.084057 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.106126 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.114796 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.114835 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.114844 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.114857 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.114866 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.122982 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.132956 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.133002 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.133015 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.133034 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.133047 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.147893 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.156108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.156143 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.156154 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.156171 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:39 crc kubenswrapper[5109]: I0219 00:10:39.156184 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:39Z","lastTransitionTime":"2026-02-19T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.171352 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.171513 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.171541 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.272599 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.373419 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.473697 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.574784 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.675678 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.775787 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.875905 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:39 crc kubenswrapper[5109]: E0219 00:10:39.976933 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.077902 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.178360 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.278420 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.379386 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.480280 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.581260 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.682245 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.782594 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.883216 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:40 crc kubenswrapper[5109]: E0219 00:10:40.983607 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.064372 5109 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.084625 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.185057 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.285543 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.386159 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.487055 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.587495 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.688446 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.788810 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.889731 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:41 crc kubenswrapper[5109]: E0219 00:10:41.989851 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.090704 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.191184 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.292322 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.393468 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.494339 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.594531 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.695496 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.796204 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.896659 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:42 crc kubenswrapper[5109]: E0219 00:10:42.997282 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.097683 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.198552 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.299749 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.400364 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.500744 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.601841 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.702904 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.804115 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:43 crc kubenswrapper[5109]: E0219 00:10:43.905014 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.005382 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.106516 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.207834 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.308551 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.409538 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.510478 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.611449 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.712169 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.812863 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:44 crc kubenswrapper[5109]: E0219 00:10:44.913454 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.014256 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.115409 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.216371 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.316698 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.416998 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.517169 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.617741 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.718792 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.819134 5109 kubelet_node_status.go:515] "Error getting the current node from lister"
err="node \"crc\" not found"
Feb 19 00:10:45 crc kubenswrapper[5109]: E0219 00:10:45.920264 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.020598 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.116546 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.116790 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.117684 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.117744 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.117767 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.118373 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.118787 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.119093 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.121580 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.222260 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.274435 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.304676 5109 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.305741 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.305814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.305834 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.306606 5109 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 19 00:10:46 crc kubenswrapper[5109]: I0219 00:10:46.307045 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.307363 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.323008 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.424208 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.525021 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.625977 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.726313 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.826941 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:46 crc kubenswrapper[5109]: E0219 00:10:46.928053 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.029174 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.129553 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.230006 5109 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.268607 5109 reflector.go:430] "Caches populated" type="*v1.Node"
reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.313340 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.326443 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.332249 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.332286 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.332298 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.332317 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.332331 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.427239 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.434573 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.434663 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.434684 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.434708 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.434725 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.528604 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.536773 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.536810 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.536822 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.536838 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.536850 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.628938 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.639230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.639269 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.639281 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.639297 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.639339 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.742136 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.742229 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.742290 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.742315 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.742333 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.845073 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.845126 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.845137 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.845152 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.845163 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.914346 5109 apiserver.go:52] "Watching apiserver"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.926680 5109 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.927264 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-dns/node-resolver-bjs9p","openshift-etcd/etcd-crc","openshift-machine-config-operator/machine-config-daemon-ntpdt","openshift-multus/network-metrics-daemon-scmsj","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94","openshift-ovn-kubernetes/ovnkube-node-bgfm9","openshift-image-registry/node-ca-cltq5","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-htkb9","openshift-multus/multus-ctz69","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"]
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.928615 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.929226 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.929372 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.930558 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.930720 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.930592 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.930808 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.931713 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.931848 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.933123 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.933136 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.933255 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.933142 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.933358 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.934017 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.934017 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.934676 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.935052 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.947710 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.947788 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.947816 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.947849 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.947874 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:47Z","lastTransitionTime":"2026-02-19T00:10:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.948913 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.952077 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.952087 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.952195 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.952100 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.952205 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.953078 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.955258 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.955355 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.956376 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.961903 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt"
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.964117 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.964430 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.964509 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.964424 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.964700 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.968594 5109 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.970837 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.971613 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.972458 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.972597 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.972747 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.974569 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.974889 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.975158 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.976038 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.979989 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.981673 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.981900 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.982013 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.984045 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.985704 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.985815 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.985835 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.985879 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.986122 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.986246 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.986965 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ctz69" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.987749 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.988398 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.988856 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99" Feb 19 00:10:47 crc kubenswrapper[5109]: E0219 00:10:47.989139 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.989192 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 19 00:10:47 crc kubenswrapper[5109]: I0219 00:10:47.998515 5109 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.007101 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.014676 5109 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.016054 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.026099 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.037414 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.046542 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.050108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.050149 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.050161 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.050179 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.050191 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.057101 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.065814 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.076519 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.089341 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.100682 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.109791 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.109859 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.109900 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 19 
00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.109932 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.109965 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110039 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110084 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110118 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110147 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110179 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110210 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110244 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110277 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110310 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: 
I0219 00:10:48.110339 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110370 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110400 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110433 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110463 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110494 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: 
\"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110528 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110558 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110588 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110618 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110673 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: 
I0219 00:10:48.110706 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110716 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110741 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110791 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110823 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110774 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110917 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.110966 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111035 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111072 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111103 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111139 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111229 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111262 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111296 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111330 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 19 00:10:48 
crc kubenswrapper[5109]: I0219 00:10:48.111363 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111395 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111430 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111891 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.111989 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.112042 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.112080 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.112114 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.112151 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.112184 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113049 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113131 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113194 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113247 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113510 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113722 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113797 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113836 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.113873 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.114020 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114352 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114416 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114667 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114908 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114936 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.115058 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.114959 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.115745 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.115847 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116025 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116032 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116115 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116315 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116406 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116517 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116570 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116609 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116651 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116677 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.116524 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116699 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116720 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116808 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116814 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116838 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116862 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116883 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116906 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116928 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116950 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116933 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116971 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116997 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117030 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117534 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod 
\"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117716 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117749 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117780 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117811 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117838 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117876 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" 
(UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117907 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117937 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117969 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118001 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118032 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: 
I0219 00:10:48.118061 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118091 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118121 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118150 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118190 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118225 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118256 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118288 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118321 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118351 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118382 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.119182 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119242 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119296 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119545 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119572 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119593 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119617 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119654 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119538 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119723 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119744 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119765 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119889 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119912 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119930 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119947 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119969 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.119990 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120017 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120036 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120053 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120070 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120098 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120117 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120159 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.116987 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117256 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117797 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117846 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117878 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117934 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118022 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.117966 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118209 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118270 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118324 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118350 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118840 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.118890 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119707 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119976 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.120190 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.620160485 +0000 UTC m=+78.456400504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120775 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.121291 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.121564 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.121840 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.121929 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.121981 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122199 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122302 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122338 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122393 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122623 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.122731 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.123107 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.123356 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.123380 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.120239 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.119753 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.123569 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.124595 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.124976 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125077 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125344 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125425 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125500 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125518 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.124981 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.126511 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.126541 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.126584 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.126921 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.127068 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.127519 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.128139 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.128385 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.128858 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.128946 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.129113 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.129515 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.129796 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.129985 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.130501 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.130586 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.130775 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131195 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131210 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.125525 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131334 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131623 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131719 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.131776 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). 
InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.132245 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.132303 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.132682 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.132923 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133316 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133338 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133383 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133397 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133446 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133494 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133550 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133603 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133700 5109 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133582 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133792 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133752 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133916 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.133964 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134002 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134048 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134106 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134156 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134191 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134207 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134259 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134310 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: 
"d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134339 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134378 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134421 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134456 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134533 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.134565 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134575 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134736 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134742 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134779 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134820 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134858 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134903 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.134936 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135051 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135049 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135088 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135127 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135149 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135164 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135200 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135235 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135269 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135307 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135344 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135381 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135423 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135687 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135745 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135782 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " 
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135797 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135816 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.135985 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136080 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136136 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136183 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136213 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136356 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136947 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.137015 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137348 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137436 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137581 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137709 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137764 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137821 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137880 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137931 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137984 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138043 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.138096 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138154 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138214 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138269 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138327 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138382 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138436 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138487 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138540 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138612 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138710 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.138774 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138829 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138879 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138933 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139030 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139086 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139148 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139200 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139253 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139313 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139368 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.139441 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139498 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139554 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139620 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139722 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139788 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: 
\"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139845 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139900 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139960 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140013 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140067 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.140129 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140183 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140238 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140300 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140361 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140417 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140490 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140849 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140892 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140916 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140942 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.140966 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140993 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141016 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141068 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141098 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141169 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/42e68a30-b704-4b69-b682-602323a8ead0-hosts-file\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141193 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141212 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141235 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-netns\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141257 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-hostroot\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141295 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") 
pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141318 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141345 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141368 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141396 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141421 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141446 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141466 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-os-release\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141487 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-conf-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141512 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141535 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141555 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141578 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea82223b-3009-45c2-bf16-6037e4f81188-host\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141605 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llz75\" (UniqueName: \"kubernetes.io/projected/ea82223b-3009-45c2-bf16-6037e4f81188-kube-api-access-llz75\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141726 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-os-release\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141766 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141793 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mc4c\" (UniqueName: \"kubernetes.io/projected/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-kube-api-access-5mc4c\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141820 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-multus\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141841 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-kubelet\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141872 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" 
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141895 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141917 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141941 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141966 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141989 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxzg\" (UniqueName: \"kubernetes.io/projected/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-kube-api-access-fvxzg\") pod \"multus-ctz69\" (UID: 
\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142010 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142039 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mndtm\" (UniqueName: \"kubernetes.io/projected/42e68a30-b704-4b69-b682-602323a8ead0-kube-api-access-mndtm\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142095 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142122 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142143 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-htkb9\" 
(UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142167 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dwfg\" (UniqueName: \"kubernetes.io/projected/45b69efd-a181-4847-9934-8ea00d53e9fd-kube-api-access-8dwfg\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142194 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142216 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142238 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-socket-dir-parent\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142263 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-bin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142286 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-multus-certs\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142316 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142355 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142387 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-rootfs\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142414 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cnibin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142448 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142471 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-system-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142499 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-k8s-cni-cncf-io\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142532 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-daemon-config\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142576 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: 
\"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142614 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142663 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42e68a30-b704-4b69-b682-602323a8ead0-tmp-dir\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142687 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea82223b-3009-45c2-bf16-6037e4f81188-serviceca\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142710 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142742 5109 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136208 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142771 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.136809 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137045 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137247 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137808 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.137664 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138318 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138461 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138587 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.138777 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143118 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139198 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139238 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139402 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139461 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139472 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139599 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139671 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139766 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139824 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.139860 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140077 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140222 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140230 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140245 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140232 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140254 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.140337 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141389 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141767 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.141949 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142052 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142089 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142151 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142214 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142552 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142554 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143219 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143501 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143251 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143795 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.143782 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144077 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144110 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144117 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144186 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144205 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144521 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144738 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.144797 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145058 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.142801 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145155 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145206 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-etc-kubernetes\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145333 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145500 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145536 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145586 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145739 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145758 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145909 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145980 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.145506 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.146038 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.146462 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.146914 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.146946 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.147204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.147381 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.147412 5109 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.147511 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.147795 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.147927 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.647906646 +0000 UTC m=+78.484146645 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.147964 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj2g9\" (UniqueName: \"kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148111 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-proxy-tls\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148144 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148193 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148192 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148234 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148305 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148342 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148449 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148746 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148783 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148399 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gc7q\" (UniqueName: \"kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.148897 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.148982 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.149059 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.649038479 +0000 UTC m=+78.485278468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.149075 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.149339 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.149420 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.149821 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.150621 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.150702 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.150871 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d54tt\" (UniqueName: \"kubernetes.io/projected/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-kube-api-access-d54tt\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.150911 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.150943 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151018 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-cnibin\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151103 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cni-binary-copy\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151281 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151310 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151332 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151351 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151368 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151401 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151421 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151438 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151455 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151471 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151489 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151507 5109 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151524 5109 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151539 5109 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151555 5109 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151572 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151589 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151607 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151623 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151669 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151687 5109 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151702 5109 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151718 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151734 5109 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151625 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7"
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151750 5109 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151791 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151807 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151820 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151833 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151844 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151855 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\""
Feb 19
00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151869 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151882 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151895 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151906 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151917 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151929 5109 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151939 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151950 5109 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151962 5109 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151974 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151987 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.151998 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152011 5109 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152022 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152034 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152045 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152058 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152080 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152091 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152102 5109 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152114 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152125 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: 
\"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152142 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152154 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152168 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152218 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152233 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152246 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152258 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152269 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152281 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152291 5109 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152304 5109 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152317 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152329 5109 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152342 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152353 5109 reconciler_common.go:299] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152364 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152375 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152387 5109 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152399 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152410 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152423 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152435 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.152447 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152459 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152471 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152484 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152495 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152505 5109 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152516 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152527 5109 reconciler_common.go:299] "Volume detached for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152538 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152550 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152562 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152573 5109 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152584 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152595 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152608 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152619 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152646 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152669 5109 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.152681 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153213 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153226 5109 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153237 5109 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" 
DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153251 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153263 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153276 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153287 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153298 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153310 5109 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153323 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.153334 5109 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153346 5109 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153358 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153372 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153393 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153405 5109 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153417 5109 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153428 5109 reconciler_common.go:299] "Volume detached for volume \"images\" 
(UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153439 5109 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153450 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153471 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153483 5109 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153494 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153506 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153518 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node 
\"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153529 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153540 5109 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153551 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153561 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153571 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153583 5109 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153596 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153607 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153619 5109 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153646 5109 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153658 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153672 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153686 5109 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153697 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153709 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153721 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153732 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153744 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153756 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153768 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153779 5109 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153790 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153801 5109 reconciler_common.go:299] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153813 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153825 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153838 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153851 5109 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153863 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153875 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153888 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: 
\"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153903 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153914 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153926 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153937 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153947 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153960 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153971 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: 
\"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153983 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.153993 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154004 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154015 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154026 5109 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154036 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154047 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: 
\"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154060 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154072 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154083 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154095 5109 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154106 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154117 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154132 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on 
node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154143 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154155 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154166 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154177 5109 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154188 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154199 5109 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154210 5109 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.154222 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154234 5109 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154246 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154257 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154272 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154571 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154582 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154597 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154722 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154745 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.154764 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.159195 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.159763 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.159803 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.160354 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.161288 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.161546 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.162038 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.162434 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.164074 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.164168 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.164600 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.166162 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.166308 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.166333 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.166350 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.166450 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.66642491 +0000 UTC m=+78.502664919 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.167468 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.168612 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.169179 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.169695 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.170440 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.170577 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.170594 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.170134 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.171669 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172110 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172166 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172353 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172421 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172428 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172523 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172795 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.172866 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173054 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173092 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173129 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173392 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173471 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173534 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.173982 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.174053 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.174709 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.174754 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.174787 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.174807 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.174884 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.674858354 +0000 UTC m=+78.511098353 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.175616 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.180150 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.180313 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.184888 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.187397 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.193597 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.199865 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.208180 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.211851 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537
851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"sta
rted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"
ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.213053 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.220849 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.231219 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.240426 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.247617 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.254648 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.254949 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.254994 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255018 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-rootfs\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255023 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255077 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-rootfs\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255142 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cnibin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255167 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-system-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255204 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-k8s-cni-cncf-io\") pod \"multus-ctz69\" 
(UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255221 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-daemon-config\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255249 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42e68a30-b704-4b69-b682-602323a8ead0-tmp-dir\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255265 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-system-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255286 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cnibin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255293 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-k8s-cni-cncf-io\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255314 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea82223b-3009-45c2-bf16-6037e4f81188-serviceca\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255398 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255436 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255452 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-etc-kubernetes\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255475 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-etc-kubernetes\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255483 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255506 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255524 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kj2g9\" (UniqueName: \"kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255545 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-proxy-tls\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255559 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255575 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255599 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5gc7q\" (UniqueName: \"kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255613 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255641 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255659 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d54tt\" (UniqueName: \"kubernetes.io/projected/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-kube-api-access-d54tt\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255678 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255694 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255708 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-cnibin\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255721 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cni-binary-copy\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255737 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/42e68a30-b704-4b69-b682-602323a8ead0-hosts-file\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255751 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255765 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255779 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-netns\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255793 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-hostroot\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255814 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255831 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin\") pod \"ovnkube-node-bgfm9\" (UID: 
\"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255845 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255860 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255880 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-os-release\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255894 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-conf-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255912 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") 
" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255934 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255939 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255955 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255989 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea82223b-3009-45c2-bf16-6037e4f81188-host\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256015 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-llz75\" (UniqueName: \"kubernetes.io/projected/ea82223b-3009-45c2-bf16-6037e4f81188-kube-api-access-llz75\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.256037 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-os-release\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256062 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256085 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5mc4c\" (UniqueName: \"kubernetes.io/projected/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-kube-api-access-5mc4c\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256107 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-multus\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256127 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-kubelet\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.256171 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-kubelet\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256202 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ea82223b-3009-45c2-bf16-6037e4f81188-host\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256289 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256369 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-os-release\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256402 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256439 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.256506 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.256555 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:10:48.756540531 +0000 UTC m=+78.592780520 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256583 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-multus\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256588 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-daemon-config\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256662 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256688 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256726 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256732 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256738 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.255878 5109 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/42e68a30-b704-4b69-b682-602323a8ead0-tmp-dir\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256742 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-cni-binary-copy\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256745 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256817 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256843 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxzg\" (UniqueName: \"kubernetes.io/projected/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-kube-api-access-fvxzg\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256906 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-system-cni-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256934 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.256950 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257065 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257105 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-cnibin\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257150 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/42e68a30-b704-4b69-b682-602323a8ead0-hosts-file\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257180 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257208 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257238 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-netns\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257238 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-mcd-auth-proxy-config\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257293 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-hostroot\") pod \"multus-ctz69\" (UID: 
\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257339 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-conf-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257485 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257530 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257559 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.257665 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45b69efd-a181-4847-9934-8ea00d53e9fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.257777 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258150 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/ea82223b-3009-45c2-bf16-6037e4f81188-serviceca\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258337 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-os-release\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258374 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258400 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mndtm\" (UniqueName: \"kubernetes.io/projected/42e68a30-b704-4b69-b682-602323a8ead0-kube-api-access-mndtm\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258422 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258450 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258471 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258492 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258661 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-cni-dir\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258671 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258493 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8dwfg\" (UniqueName: \"kubernetes.io/projected/45b69efd-a181-4847-9934-8ea00d53e9fd-kube-api-access-8dwfg\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258708 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258713 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258735 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258742 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258756 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-socket-dir-parent\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-bin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258790 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258798 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-multus-certs\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258929 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath 
\"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258947 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258959 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258972 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258986 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.258999 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259011 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259023 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 
00:10:48.259035 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259047 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259059 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259075 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259089 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259101 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259106 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-multus-socket-dir-parent\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: 
I0219 00:10:48.259114 5109 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259140 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-run-multus-certs\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259155 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259167 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259176 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259177 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259201 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259216 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259228 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259242 5109 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259253 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259290 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259304 5109 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259318 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259331 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259343 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259183 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-host-var-lib-cni-bin\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259355 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259402 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259426 5109 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259440 5109 
reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259490 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259504 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259514 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259522 5109 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259684 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259696 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259751 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259770 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259781 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.259878 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-proxy-tls\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.260677 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.260705 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.260718 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.260736 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.260748 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.264658 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.265112 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/45b69efd-a181-4847-9934-8ea00d53e9fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.269607 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.270315 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.272063 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.273089 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-llz75\" (UniqueName: \"kubernetes.io/projected/ea82223b-3009-45c2-bf16-6037e4f81188-kube-api-access-llz75\") pod \"node-ca-cltq5\" (UID: \"ea82223b-3009-45c2-bf16-6037e4f81188\") " pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.273382 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mc4c\" (UniqueName: \"kubernetes.io/projected/3dd0092b-65e0-496b-aad5-33d7ca9ca9d6-kube-api-access-5mc4c\") pod \"machine-config-daemon-ntpdt\" (UID: \"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\") " pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.276962 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gc7q\" (UniqueName: \"kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q\") pod \"ovnkube-control-plane-57b78d8988-9cp94\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.276956 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj2g9\" (UniqueName: 
\"kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9\") pod \"ovnkube-node-bgfm9\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.280489 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dwfg\" (UniqueName: \"kubernetes.io/projected/45b69efd-a181-4847-9934-8ea00d53e9fd-kube-api-access-8dwfg\") pod \"multus-additional-cni-plugins-htkb9\" (UID: \"45b69efd-a181-4847-9934-8ea00d53e9fd\") " pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.280568 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d54tt\" (UniqueName: \"kubernetes.io/projected/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-kube-api-access-d54tt\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.280948 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.282507 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:10:48 crc kubenswrapper[5109]: set +o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not 
enabled. Feb 19 00:10:48 crc kubenswrapper[5109]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 19 00:10:48 crc kubenswrapper[5109]: ho_enable="--enable-hybrid-overlay" Feb 19 00:10:48 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 19 00:10:48 crc kubenswrapper[5109]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 19 00:10:48 crc kubenswrapper[5109]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-host=127.0.0.1 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-port=9743 \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ho_enable} \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-interconnect \ Feb 19 00:10:48 crc kubenswrapper[5109]: --disable-approver \ Feb 19 00:10:48 crc kubenswrapper[5109]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --wait-for-kubernetes-api=200s \ Feb 19 00:10:48 crc kubenswrapper[5109]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.285418 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe 
Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:10:48 crc kubenswrapper[5109]: set +o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --disable-webhook \ Feb 19 00:10:48 crc kubenswrapper[5109]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.286253 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mndtm\" (UniqueName: \"kubernetes.io/projected/42e68a30-b704-4b69-b682-602323a8ead0-kube-api-access-mndtm\") pod \"node-resolver-bjs9p\" (UID: \"42e68a30-b704-4b69-b682-602323a8ead0\") " pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.286695 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" 
for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.288515 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.289448 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxzg\" (UniqueName: \"kubernetes.io/projected/9d3c36ec-d151-4cb3-8bcb-931c2665a1e7-kube-api-access-fvxzg\") pod \"multus-ctz69\" (UID: \"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\") " pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.294616 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-bjs9p" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.298095 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dd0092b_65e0_496b_aad5_33d7ca9ca9d6.slice/crio-12a1c5a975cea86a9a67f8a291cfa071d3be5e5b678d6e0602d1dca79cf28964 WatchSource:0}: Error finding container 12a1c5a975cea86a9a67f8a291cfa071d3be5e5b678d6e0602d1dca79cf28964: Status 404 returned error can't find the container with id 12a1c5a975cea86a9a67f8a291cfa071d3be5e5b678d6e0602d1dca79cf28964 Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.300510 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.302261 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.303133 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.304835 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.305172 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42e68a30_b704_4b69_b682_602323a8ead0.slice/crio-d18e2409874f65e9f50ebcc5d5cbe0e75a3bcea240f89040e024bb50611d1c9c WatchSource:0}: Error finding container d18e2409874f65e9f50ebcc5d5cbe0e75a3bcea240f89040e024bb50611d1c9c: Status 404 returned error can't find the container with id d18e2409874f65e9f50ebcc5d5cbe0e75a3bcea240f89040e024bb50611d1c9c Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.307757 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:48 crc kubenswrapper[5109]: set -uo pipefail Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 19 00:10:48 crc kubenswrapper[5109]: HOSTS_FILE="/etc/hosts" Feb 19 00:10:48 crc kubenswrapper[5109]: TEMP_FILE="/tmp/hosts.tmp" Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Make a temporary file with the old hosts file's attributes. Feb 19 00:10:48 crc kubenswrapper[5109]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 19 00:10:48 crc kubenswrapper[5109]: echo "Failed to preserve hosts file. Exiting." 
Feb 19 00:10:48 crc kubenswrapper[5109]: exit 1 Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: while true; do Feb 19 00:10:48 crc kubenswrapper[5109]: declare -A svc_ips Feb 19 00:10:48 crc kubenswrapper[5109]: for svc in "${services[@]}"; do Feb 19 00:10:48 crc kubenswrapper[5109]: # Fetch service IP from cluster dns if present. We make several tries Feb 19 00:10:48 crc kubenswrapper[5109]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 19 00:10:48 crc kubenswrapper[5109]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 19 00:10:48 crc kubenswrapper[5109]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 19 00:10:48 crc kubenswrapper[5109]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 19 00:10:48 crc kubenswrapper[5109]: for i in ${!cmds[*]} Feb 19 00:10:48 crc kubenswrapper[5109]: do Feb 19 00:10:48 crc kubenswrapper[5109]: ips=($(eval "${cmds[i]}")) Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: svc_ips["${svc}"]="${ips[@]}" Feb 19 00:10:48 crc kubenswrapper[5109]: break Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Update /etc/hosts only if we get valid service IPs Feb 19 00:10:48 crc kubenswrapper[5109]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 19 00:10:48 crc kubenswrapper[5109]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 19 00:10:48 crc kubenswrapper[5109]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 19 00:10:48 crc kubenswrapper[5109]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: continue Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Append resolver entries for services Feb 19 00:10:48 crc kubenswrapper[5109]: rc=0 Feb 19 00:10:48 crc kubenswrapper[5109]: for svc in "${!svc_ips[@]}"; do Feb 19 00:10:48 crc kubenswrapper[5109]: for ip in ${svc_ips[${svc}]}; do Feb 19 00:10:48 crc kubenswrapper[5109]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ $rc -ne 0 ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: continue Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 19 00:10:48 crc kubenswrapper[5109]: # Replace /etc/hosts with our modified version if needed Feb 19 00:10:48 crc kubenswrapper[5109]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 19 00:10:48 crc kubenswrapper[5109]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: unset svc_ips Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mndtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bjs9p_openshift-dns(42e68a30-b704-4b69-b682-602323a8ead0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.308371 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-cltq5" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.308847 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bjs9p" podUID="42e68a30-b704-4b69-b682-602323a8ead0" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.311540 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjs9p" event={"ID":"42e68a30-b704-4b69-b682-602323a8ead0","Type":"ContainerStarted","Data":"d18e2409874f65e9f50ebcc5d5cbe0e75a3bcea240f89040e024bb50611d1c9c"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.313197 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"12a1c5a975cea86a9a67f8a291cfa071d3be5e5b678d6e0602d1dca79cf28964"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.315157 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"89459ad10dffb5fec1813f5edabc7d1980c7c184366c5f4b7a6e011e1e9a95dd"} Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.315828 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:48 crc kubenswrapper[5109]: set -uo pipefail Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 19 
00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 19 00:10:48 crc kubenswrapper[5109]: HOSTS_FILE="/etc/hosts" Feb 19 00:10:48 crc kubenswrapper[5109]: TEMP_FILE="/tmp/hosts.tmp" Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Make a temporary file with the old hosts file's attributes. Feb 19 00:10:48 crc kubenswrapper[5109]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 19 00:10:48 crc kubenswrapper[5109]: echo "Failed to preserve hosts file. Exiting." Feb 19 00:10:48 crc kubenswrapper[5109]: exit 1 Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: while true; do Feb 19 00:10:48 crc kubenswrapper[5109]: declare -A svc_ips Feb 19 00:10:48 crc kubenswrapper[5109]: for svc in "${services[@]}"; do Feb 19 00:10:48 crc kubenswrapper[5109]: # Fetch service IP from cluster dns if present. We make several tries Feb 19 00:10:48 crc kubenswrapper[5109]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 19 00:10:48 crc kubenswrapper[5109]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 19 00:10:48 crc kubenswrapper[5109]: # support UDP loadbalancers and require reaching DNS through TCP. 
Feb 19 00:10:48 crc kubenswrapper[5109]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:10:48 crc kubenswrapper[5109]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 19 00:10:48 crc kubenswrapper[5109]: for i in ${!cmds[*]} Feb 19 00:10:48 crc kubenswrapper[5109]: do Feb 19 00:10:48 crc kubenswrapper[5109]: ips=($(eval "${cmds[i]}")) Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: svc_ips["${svc}"]="${ips[@]}" Feb 19 00:10:48 crc kubenswrapper[5109]: break Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Update /etc/hosts only if we get valid service IPs Feb 19 00:10:48 crc kubenswrapper[5109]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 19 00:10:48 crc kubenswrapper[5109]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 19 00:10:48 crc kubenswrapper[5109]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 19 00:10:48 crc kubenswrapper[5109]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: continue Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Append resolver entries for services Feb 19 00:10:48 crc kubenswrapper[5109]: rc=0 Feb 19 00:10:48 crc kubenswrapper[5109]: for svc in "${!svc_ips[@]}"; do Feb 19 00:10:48 crc kubenswrapper[5109]: for ip in ${svc_ips[${svc}]}; do Feb 19 00:10:48 crc kubenswrapper[5109]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ $rc -ne 0 ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: continue Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 19 00:10:48 crc kubenswrapper[5109]: # Replace /etc/hosts with our modified version if needed Feb 19 00:10:48 crc kubenswrapper[5109]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 19 00:10:48 crc kubenswrapper[5109]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:10:48 crc kubenswrapper[5109]: unset svc_ips Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mndtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bjs9p_openshift-dns(42e68a30-b704-4b69-b682-602323a8ead0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.315891 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2955042f_e905_4bd8_893a_97e7c9723fca.slice/crio-6bb581bc8ecefe4984214d4b56f5c6b8603839085b55a0e81dc2e4cac8eb01a5 WatchSource:0}: Error finding container 6bb581bc8ecefe4984214d4b56f5c6b8603839085b55a0e81dc2e4cac8eb01a5: Status 404 returned error can't find the container with id 6bb581bc8ecefe4984214d4b56f5c6b8603839085b55a0e81dc2e4cac8eb01a5 Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.315893 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-htkb9" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.316302 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.316461 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:10:48 crc kubenswrapper[5109]: set +o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 19 00:10:48 crc kubenswrapper[5109]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 19 00:10:48 crc kubenswrapper[5109]: ho_enable="--enable-hybrid-overlay" Feb 19 00:10:48 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 19 00:10:48 crc kubenswrapper[5109]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 19 00:10:48 crc kubenswrapper[5109]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-host=127.0.0.1 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --webhook-port=9743 \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ho_enable} \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-interconnect \ Feb 19 00:10:48 crc kubenswrapper[5109]: --disable-approver \ Feb 19 00:10:48 crc kubenswrapper[5109]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --wait-for-kubernetes-api=200s \ Feb 19 00:10:48 crc kubenswrapper[5109]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.317219 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bjs9p" 
podUID="42e68a30-b704-4b69-b682-602323a8ead0" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.318279 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:10:48 crc kubenswrapper[5109]: set +o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --disable-webhook \ Feb 19 00:10:48 crc kubenswrapper[5109]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.318704 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 19 00:10:48 crc kubenswrapper[5109]: apiVersion: v1 Feb 19 00:10:48 crc kubenswrapper[5109]: clusters: Feb 19 00:10:48 crc kubenswrapper[5109]: 
- cluster: Feb 19 00:10:48 crc kubenswrapper[5109]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 19 00:10:48 crc kubenswrapper[5109]: server: https://api-int.crc.testing:6443 Feb 19 00:10:48 crc kubenswrapper[5109]: name: default-cluster Feb 19 00:10:48 crc kubenswrapper[5109]: contexts: Feb 19 00:10:48 crc kubenswrapper[5109]: - context: Feb 19 00:10:48 crc kubenswrapper[5109]: cluster: default-cluster Feb 19 00:10:48 crc kubenswrapper[5109]: namespace: default Feb 19 00:10:48 crc kubenswrapper[5109]: user: default-auth Feb 19 00:10:48 crc kubenswrapper[5109]: name: default-context Feb 19 00:10:48 crc kubenswrapper[5109]: current-context: default-context Feb 19 00:10:48 crc kubenswrapper[5109]: kind: Config Feb 19 00:10:48 crc kubenswrapper[5109]: preferences: {} Feb 19 00:10:48 crc kubenswrapper[5109]: users: Feb 19 00:10:48 crc kubenswrapper[5109]: - name: default-auth Feb 19 00:10:48 crc kubenswrapper[5109]: user: Feb 19 00:10:48 crc kubenswrapper[5109]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:48 crc kubenswrapper[5109]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:48 crc kubenswrapper[5109]: EOF Feb 19 00:10:48 crc kubenswrapper[5109]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj2g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfm9_openshift-ovn-kubernetes(2955042f-e905-4bd8-893a-97e7c9723fca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.319015 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt 
--tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.319626 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.320693 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.320763 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.320815 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-ctz69" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.323398 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea82223b_3009_45c2_bf16_6037e4f81188.slice/crio-a25f42b1df3e35e4107267795fab52f527a0b6aa52e0e14b8a50ef4a1f936b0f WatchSource:0}: Error finding container a25f42b1df3e35e4107267795fab52f527a0b6aa52e0e14b8a50ef4a1f936b0f: Status 404 returned error can't find the container with id a25f42b1df3e35e4107267795fab52f527a0b6aa52e0e14b8a50ef4a1f936b0f Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.323727 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd7
7b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.328742 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 19 00:10:48 crc kubenswrapper[5109]: while [ true ]; Feb 19 00:10:48 crc kubenswrapper[5109]: do Feb 19 00:10:48 crc kubenswrapper[5109]: for f in $(ls 
/tmp/serviceca); do Feb 19 00:10:48 crc kubenswrapper[5109]: echo $f Feb 19 00:10:48 crc kubenswrapper[5109]: ca_file_path="/tmp/serviceca/${f}" Feb 19 00:10:48 crc kubenswrapper[5109]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 19 00:10:48 crc kubenswrapper[5109]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 19 00:10:48 crc kubenswrapper[5109]: if [ -e "${reg_dir_path}" ]; then Feb 19 00:10:48 crc kubenswrapper[5109]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:48 crc kubenswrapper[5109]: else Feb 19 00:10:48 crc kubenswrapper[5109]: mkdir $reg_dir_path Feb 19 00:10:48 crc kubenswrapper[5109]: cp $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: for d in $(ls /etc/docker/certs.d); do Feb 19 00:10:48 crc kubenswrapper[5109]: echo $d Feb 19 00:10:48 crc kubenswrapper[5109]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 19 00:10:48 crc kubenswrapper[5109]: reg_conf_path="/tmp/serviceca/${dp}" Feb 19 00:10:48 crc kubenswrapper[5109]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 19 00:10:48 crc kubenswrapper[5109]: rm -rf /etc/docker/certs.d/$d Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 60 & wait ${!} Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llz75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-cltq5_openshift-image-registry(ea82223b-3009-45c2-bf16-6037e4f81188): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.330426 5109 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-cltq5" podUID="ea82223b-3009-45c2-bf16-6037e4f81188" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.333586 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b69efd_a181_4847_9934_8ea00d53e9fd.slice/crio-5e82d51e8b7e6122b33e23731412d96496f8490c65279bc5930a4159abaab897 WatchSource:0}: Error finding container 5e82d51e8b7e6122b33e23731412d96496f8490c65279bc5930a4159abaab897: Status 404 returned error can't find the container with id 5e82d51e8b7e6122b33e23731412d96496f8490c65279bc5930a4159abaab897 Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.336294 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPat
h:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dwfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-htkb9_openshift-multus(45b69efd-a181-4847-9934-8ea00d53e9fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.336843 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.337477 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-htkb9" podUID="45b69efd-a181-4847-9934-8ea00d53e9fd" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.337861 5109 kuberuntime_manager.go:1358] 
"Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 19 00:10:48 crc kubenswrapper[5109]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 19 00:10:48 crc kubenswrapper[5109]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{
Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvxzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-ctz69_openshift-multus(9d3c36ec-d151-4cb3-8bcb-931c2665a1e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.339366 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-ctz69" podUID="9d3c36ec-d151-4cb3-8bcb-931c2665a1e7" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.347121 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.362152 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95c
a7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"star
tedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\
":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.363947 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.364017 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.364043 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.364076 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.364101 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.371203 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.379220 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.385084 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.400149 5109 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.441708 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.466090 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.466168 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.466184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.466204 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.466218 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.495886 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.524730 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.543081 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.555126 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.567801 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34177974_8d82_49d2_a763_391d0df3bbd8.slice/crio-e7576cb4147d846a01ae11e7e5d2d10c7be3b9fef72df73818ee5804746352c3 WatchSource:0}: Error finding container e7576cb4147d846a01ae11e7e5d2d10c7be3b9fef72df73818ee5804746352c3: Status 404 returned error can't find the container with id e7576cb4147d846a01ae11e7e5d2d10c7be3b9fef72df73818ee5804746352c3 Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.568344 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.568387 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.568400 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.568418 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.568432 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.572529 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: source /etc/kubernetes/apiserver-url.env Feb 19 00:10:48 crc kubenswrapper[5109]: else Feb 19 00:10:48 crc kubenswrapper[5109]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 19 00:10:48 crc kubenswrapper[5109]: exit 1 Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 19 00:10:48 crc kubenswrapper[5109]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.573128 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 
maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.573701 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.574042 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-5b9bfa05232cd0100dc9a9a2631172750168f50cdabac5c1e25d38db267919eb WatchSource:0}: Error finding container 5b9bfa05232cd0100dc9a9a2631172750168f50cdabac5c1e25d38db267919eb: Status 
404 returned error can't find the container with id 5b9bfa05232cd0100dc9a9a2631172750168f50cdabac5c1e25d38db267919eb Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.575173 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.580771 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.582039 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 19 00:10:48 crc kubenswrapper[5109]: W0219 00:10:48.585155 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a1c588b_414d_4d41_94a6_b74745ffd8c9.slice/crio-1ec5a4dd74b6f09d1465fa4a18e0a36b9172edc2820bed79e1b65b26efe9c091 WatchSource:0}: Error finding container 1ec5a4dd74b6f09d1465fa4a18e0a36b9172edc2820bed79e1b65b26efe9c091: Status 404 returned error can't find the container with id 1ec5a4dd74b6f09d1465fa4a18e0a36b9172edc2820bed79e1b65b26efe9c091 Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.587184 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:48 crc kubenswrapper[5109]: set -euo pipefail Feb 19 00:10:48 crc kubenswrapper[5109]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 19 00:10:48 crc kubenswrapper[5109]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 19 00:10:48 crc kubenswrapper[5109]: # As the secret mount is optional we must wait for the files to be present. Feb 19 00:10:48 crc kubenswrapper[5109]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 19 00:10:48 crc kubenswrapper[5109]: TS=$(date +%s) Feb 19 00:10:48 crc kubenswrapper[5109]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 19 00:10:48 crc kubenswrapper[5109]: HAS_LOGGED_INFO=0 Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: log_missing_certs(){ Feb 19 00:10:48 crc kubenswrapper[5109]: CUR_TS=$(date +%s) Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 19 00:10:48 crc kubenswrapper[5109]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 19 00:10:48 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. 
Waiting 20 minutes. Feb 19 00:10:48 crc kubenswrapper[5109]: HAS_LOGGED_INFO=1 Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: } Feb 19 00:10:48 crc kubenswrapper[5109]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 19 00:10:48 crc kubenswrapper[5109]: log_missing_certs Feb 19 00:10:48 crc kubenswrapper[5109]: sleep 5 Feb 19 00:10:48 crc kubenswrapper[5109]: done Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/kube-rbac-proxy \ Feb 19 00:10:48 crc kubenswrapper[5109]: --logtostderr \ Feb 19 00:10:48 crc kubenswrapper[5109]: --secure-listen-address=:9108 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 19 00:10:48 crc kubenswrapper[5109]: --upstream=http://127.0.0.1:29108/ \ Feb 19 00:10:48 crc kubenswrapper[5109]: --tls-private-key-file=${TLS_PK} \ Feb 19 00:10:48 crc kubenswrapper[5109]: --tls-cert-file=${TLS_CERT} Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.590729 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:48 crc kubenswrapper[5109]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:10:48 crc kubenswrapper[5109]: set +o allexport Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 19 
00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # This is needed so that converting clusters from GA to TP Feb 19 00:10:48 crc kubenswrapper[5109]: # will rollout control plane pods as well Feb 19 00:10:48 crc kubenswrapper[5109]: network_segmentation_enabled_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: multi_network_enabled_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:10:48 crc kubenswrapper[5109]: fi 
Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "true" != "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: route_advertisements_enable_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Enable multi-network policy if configured (control-plane always full mode) Feb 19 00:10:48 crc kubenswrapper[5109]: multi_network_policy_enabled_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: # Enable admin network policy if configured (control-plane always full mode) Feb 19 00:10:48 crc kubenswrapper[5109]: admin_network_policy_enabled_flag= Feb 19 00:10:48 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:10:48 crc kubenswrapper[5109]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: if [ "shared" == "shared" ]; then Feb 19 00:10:48 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode shared" Feb 19 00:10:48 crc kubenswrapper[5109]: elif [ "shared" == "local" ]; then Feb 19 00:10:48 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode local" Feb 19 00:10:48 crc kubenswrapper[5109]: else Feb 19 00:10:48 crc kubenswrapper[5109]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 19 00:10:48 crc kubenswrapper[5109]: exit 1 Feb 19 00:10:48 crc kubenswrapper[5109]: fi Feb 19 00:10:48 crc kubenswrapper[5109]: Feb 19 00:10:48 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 19 00:10:48 crc kubenswrapper[5109]: exec /usr/bin/ovnkube \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-interconnect \ Feb 19 00:10:48 crc kubenswrapper[5109]: --init-cluster-manager "${K8S_NODE}" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 19 00:10:48 crc kubenswrapper[5109]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --metrics-bind-address "127.0.0.1:29108" \ Feb 19 00:10:48 crc kubenswrapper[5109]: --metrics-enable-pprof \ Feb 19 00:10:48 crc kubenswrapper[5109]: --metrics-enable-config-duration \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ovn_v4_join_subnet_opt} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ovn_v6_join_subnet_opt} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${dns_name_resolver_enabled_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${persistent_ips_enabled_flag} \ Feb 19 00:10:48 crc 
kubenswrapper[5109]: ${multi_network_enabled_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${network_segmentation_enabled_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${gateway_mode_flags} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${route_advertisements_enable_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${preconfigured_udn_addresses_enable_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-egress-ip=true \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-egress-firewall=true \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-egress-qos=true \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-egress-service=true \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-multicast \ Feb 19 00:10:48 crc kubenswrapper[5109]: --enable-multi-external-gateway=true \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${multi_network_policy_enabled_flag} \ Feb 19 00:10:48 crc kubenswrapper[5109]: ${admin_network_policy_enabled_flag} Feb 19 00:10:48 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:48 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.592362 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.602939 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.648070 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.664123 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.664291 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.664265847 +0000 UTC m=+79.500505846 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.664411 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.664485 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.664587 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.664670 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.664659768 +0000 UTC m=+79.500899767 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.664737 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.664900 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.664861924 +0000 UTC m=+79.501101953 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.670717 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.670771 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.670780 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.670795 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc 
kubenswrapper[5109]: I0219 00:10:48.670821 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.686887 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.727164 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.765627 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.765762 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.765806 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.765895 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.765964 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.765984 5109 projected.go:289] 
Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766029 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766048 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766069 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766116 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.766096596 +0000 UTC m=+79.602336595 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.765994 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766172 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.766150237 +0000 UTC m=+79.602390266 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.766357 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:49.766314952 +0000 UTC m=+79.602554981 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.770952 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.772801 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.772915 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.772984 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.773017 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.773092 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.806869 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.842096 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with 
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.875007 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.875072 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 
crc kubenswrapper[5109]: I0219 00:10:48.875083 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.875095 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.875105 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.883449 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.925453 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.962836 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.977057 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.977114 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.977133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.977161 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.977179 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:48Z","lastTransitionTime":"2026-02-19T00:10:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.990811 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:48 crc kubenswrapper[5109]: E0219 00:10:48.990991 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:48 crc kubenswrapper[5109]: I0219 00:10:48.998764 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.000084 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.003725 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.004104 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.006607 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.011366 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.016781 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.019016 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.021008 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.022201 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.024846 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.027047 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.029801 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.030480 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.032748 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.033187 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.034074 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.034749 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.036183 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.037596 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.039279 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.040400 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.043381 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.044102 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.045095 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.046881 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.047562 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.048871 5109 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.050440 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.051736 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.053891 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.054396 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.056432 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.057675 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.060445 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.062441 5109 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.063983 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.064948 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.066659 5109 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.066861 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.072416 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.075087 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.076802 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: 
I0219 00:10:49.079000 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079781 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079838 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079889 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079907 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079934 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.079950 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.081405 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.082304 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.083536 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.084835 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.087284 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.089968 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.091782 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.094050 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" 
path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.095882 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.096880 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.098155 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.100328 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.102574 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.104079 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.105227 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.106986 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.120530 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.169601 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc
158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 
maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.183164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.183293 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.183316 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.183341 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.183363 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.184278 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.184331 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.184350 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.184370 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.184386 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.205106 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.207862 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e
1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memor
y\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f2883
7fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.209531 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.209581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.209594 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.209611 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.209622 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.222869 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.226288 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.226321 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.226329 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.226343 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.226351 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.234794 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.238521 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.238560 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.238577 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.238622 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.238657 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.245939 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.250111 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1
919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c486
7005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\
\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb
3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.253068 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.253202 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.253303 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.253426 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 
00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.253515 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.263921 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.264155 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.286044 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.286271 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.286398 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.286523 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.286665 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.287219 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.319077 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cltq5" event={"ID":"ea82223b-3009-45c2-bf16-6037e4f81188","Type":"ContainerStarted","Data":"a25f42b1df3e35e4107267795fab52f527a0b6aa52e0e14b8a50ef4a1f936b0f"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.320313 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e7576cb4147d846a01ae11e7e5d2d10c7be3b9fef72df73818ee5804746352c3"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.322087 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ctz69" 
event={"ID":"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7","Type":"ContainerStarted","Data":"84af33e4587b3c4bcb1a1cd3b894b5c86b0a772855fdc475b5194938376d5bf6"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.322580 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:49 crc kubenswrapper[5109]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:49 crc kubenswrapper[5109]: set -o allexport Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 19 00:10:49 crc kubenswrapper[5109]: source /etc/kubernetes/apiserver-url.env Feb 19 00:10:49 crc kubenswrapper[5109]: else Feb 19 00:10:49 crc kubenswrapper[5109]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 19 00:10:49 crc kubenswrapper[5109]: exit 1 Feb 19 00:10:49 crc kubenswrapper[5109]: fi Feb 19 00:10:49 crc kubenswrapper[5109]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 19 00:10:49 crc kubenswrapper[5109]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.323193 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:49 crc kubenswrapper[5109]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 19 00:10:49 crc kubenswrapper[5109]: while [ true ]; Feb 19 00:10:49 crc kubenswrapper[5109]: do Feb 19 00:10:49 crc kubenswrapper[5109]: for f in $(ls /tmp/serviceca); do Feb 19 00:10:49 crc 
kubenswrapper[5109]: echo $f Feb 19 00:10:49 crc kubenswrapper[5109]: ca_file_path="/tmp/serviceca/${f}" Feb 19 00:10:49 crc kubenswrapper[5109]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 19 00:10:49 crc kubenswrapper[5109]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 19 00:10:49 crc kubenswrapper[5109]: if [ -e "${reg_dir_path}" ]; then Feb 19 00:10:49 crc kubenswrapper[5109]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:49 crc kubenswrapper[5109]: else Feb 19 00:10:49 crc kubenswrapper[5109]: mkdir $reg_dir_path Feb 19 00:10:49 crc kubenswrapper[5109]: cp $ca_file_path $reg_dir_path/ca.crt Feb 19 00:10:49 crc kubenswrapper[5109]: fi Feb 19 00:10:49 crc kubenswrapper[5109]: done Feb 19 00:10:49 crc kubenswrapper[5109]: for d in $(ls /etc/docker/certs.d); do Feb 19 00:10:49 crc kubenswrapper[5109]: echo $d Feb 19 00:10:49 crc kubenswrapper[5109]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 19 00:10:49 crc kubenswrapper[5109]: reg_conf_path="/tmp/serviceca/${dp}" Feb 19 00:10:49 crc kubenswrapper[5109]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 19 00:10:49 crc kubenswrapper[5109]: rm -rf /etc/docker/certs.d/$d Feb 19 00:10:49 crc kubenswrapper[5109]: fi Feb 19 00:10:49 crc kubenswrapper[5109]: done Feb 19 00:10:49 crc kubenswrapper[5109]: sleep 60 & wait ${!} Feb 19 00:10:49 crc kubenswrapper[5109]: done Feb 19 00:10:49 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llz75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-cltq5_openshift-image-registry(ea82223b-3009-45c2-bf16-6037e4f81188): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.324102 5109 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.324302 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-cltq5" podUID="ea82223b-3009-45c2-bf16-6037e4f81188" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.324936 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:49 crc kubenswrapper[5109]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 19 00:10:49 crc kubenswrapper[5109]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 19 00:10:49 crc kubenswrapper[5109]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvxzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-ctz69_openshift-multus(9d3c36ec-d151-4cb3-8bcb-931c2665a1e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.325319 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerStarted","Data":"5e82d51e8b7e6122b33e23731412d96496f8490c65279bc5930a4159abaab897"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.325562 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.326036 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-ctz69" podUID="9d3c36ec-d151-4cb3-8bcb-931c2665a1e7" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.326991 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"6bb581bc8ecefe4984214d4b56f5c6b8603839085b55a0e81dc2e4cac8eb01a5"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.327730 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dwfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-htkb9_openshift-multus(45b69efd-a181-4847-9934-8ea00d53e9fd): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.329092 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-htkb9" podUID="45b69efd-a181-4847-9934-8ea00d53e9fd" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.329518 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerStarted","Data":"1ec5a4dd74b6f09d1465fa4a18e0a36b9172edc2820bed79e1b65b26efe9c091"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.329602 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:49 crc kubenswrapper[5109]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 19 00:10:49 crc kubenswrapper[5109]: apiVersion: v1 Feb 19 00:10:49 crc kubenswrapper[5109]: clusters: Feb 19 00:10:49 crc kubenswrapper[5109]: - cluster: Feb 19 00:10:49 crc kubenswrapper[5109]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 19 00:10:49 crc kubenswrapper[5109]: server: https://api-int.crc.testing:6443 Feb 19 00:10:49 crc kubenswrapper[5109]: name: default-cluster Feb 19 00:10:49 crc kubenswrapper[5109]: contexts: Feb 19 00:10:49 crc kubenswrapper[5109]: - context: Feb 19 00:10:49 crc kubenswrapper[5109]: cluster: default-cluster Feb 19 00:10:49 crc kubenswrapper[5109]: namespace: default Feb 19 00:10:49 crc kubenswrapper[5109]: user: default-auth Feb 19 00:10:49 crc kubenswrapper[5109]: name: default-context Feb 19 00:10:49 crc kubenswrapper[5109]: current-context: default-context Feb 19 00:10:49 crc kubenswrapper[5109]: kind: Config Feb 19 00:10:49 crc kubenswrapper[5109]: preferences: {} Feb 19 00:10:49 crc kubenswrapper[5109]: 
users: Feb 19 00:10:49 crc kubenswrapper[5109]: - name: default-auth Feb 19 00:10:49 crc kubenswrapper[5109]: user: Feb 19 00:10:49 crc kubenswrapper[5109]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:49 crc kubenswrapper[5109]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 19 00:10:49 crc kubenswrapper[5109]: EOF Feb 19 00:10:49 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj2g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfm9_openshift-ovn-kubernetes(2955042f-e905-4bd8-893a-97e7c9723fca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.330759 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 
00:10:49.331737 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"5b9bfa05232cd0100dc9a9a2631172750168f50cdabac5c1e25d38db267919eb"} Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.331909 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:10:49 crc kubenswrapper[5109]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 19 00:10:49 crc kubenswrapper[5109]: set -euo pipefail Feb 19 00:10:49 crc kubenswrapper[5109]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 19 00:10:49 crc kubenswrapper[5109]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 19 00:10:49 crc kubenswrapper[5109]: # As the secret mount is optional we must wait for the files to be present. Feb 19 00:10:49 crc kubenswrapper[5109]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 19 00:10:49 crc kubenswrapper[5109]: TS=$(date +%s) Feb 19 00:10:49 crc kubenswrapper[5109]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 19 00:10:49 crc kubenswrapper[5109]: HAS_LOGGED_INFO=0 Feb 19 00:10:49 crc kubenswrapper[5109]: Feb 19 00:10:49 crc kubenswrapper[5109]: log_missing_certs(){ Feb 19 00:10:49 crc kubenswrapper[5109]: CUR_TS=$(date +%s) Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 19 00:10:49 crc kubenswrapper[5109]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 19 00:10:49 crc kubenswrapper[5109]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 19 00:10:49 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. 
Feb 19 00:10:49 crc kubenswrapper[5109]: HAS_LOGGED_INFO=1 Feb 19 00:10:49 crc kubenswrapper[5109]: fi Feb 19 00:10:49 crc kubenswrapper[5109]: } Feb 19 00:10:49 crc kubenswrapper[5109]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Feb 19 00:10:49 crc kubenswrapper[5109]: log_missing_certs Feb 19 00:10:49 crc kubenswrapper[5109]: sleep 5 Feb 19 00:10:49 crc kubenswrapper[5109]: done Feb 19 00:10:49 crc kubenswrapper[5109]: Feb 19 00:10:49 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 19 00:10:49 crc kubenswrapper[5109]: exec /usr/bin/kube-rbac-proxy \ Feb 19 00:10:49 crc kubenswrapper[5109]: --logtostderr \ Feb 19 00:10:49 crc kubenswrapper[5109]: --secure-listen-address=:9108 \ Feb 19 00:10:49 crc kubenswrapper[5109]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 19 00:10:49 crc kubenswrapper[5109]: --upstream=http://127.0.0.1:29108/ \ Feb 19 00:10:49 crc kubenswrapper[5109]: --tls-private-key-file=${TLS_PK} \ Feb 19 00:10:49 crc kubenswrapper[5109]: --tls-cert-file=${TLS_CERT} Feb 19 00:10:49 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError"
Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.333076 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.334348 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc"
Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.334778 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:10:49 crc kubenswrapper[5109]: container
&Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: set -o allexport
Feb 19 00:10:49 crc kubenswrapper[5109]: source "/env/_master"
Feb 19 00:10:49 crc kubenswrapper[5109]: set +o allexport
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "" != "" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet "
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "" != "" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet "
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "" != "" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet "
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "" != "" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet "
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: persistent_ips_enabled_flag="--enable-persistent-ips"
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: # This is needed so that converting clusters from GA to TP
Feb 19 00:10:49 crc kubenswrapper[5109]: # will rollout control plane pods as well
Feb 19 00:10:49 crc kubenswrapper[5109]: network_segmentation_enabled_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: multi_network_enabled_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "true" != "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]: network_segmentation_enabled_flag="--enable-network-segmentation"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: route_advertisements_enable_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: route_advertisements_enable_flag="--enable-route-advertisements"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: # Enable multi-network policy if configured (control-plane always full mode)
Feb 19 00:10:49 crc kubenswrapper[5109]: multi_network_policy_enabled_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: # Enable admin network policy if configured (control-plane always full mode)
Feb 19 00:10:49 crc kubenswrapper[5109]: admin_network_policy_enabled_flag=
Feb 19 00:10:49 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: admin_network_policy_enabled_flag="--enable-admin-network-policy"
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: if [ "shared" == "shared" ]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode shared"
Feb 19 00:10:49 crc kubenswrapper[5109]: elif [ "shared" == "local" ]; then
Feb 19 00:10:49 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode local"
Feb 19 00:10:49 crc kubenswrapper[5109]: else
Feb 19 00:10:49 crc kubenswrapper[5109]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"."
Feb 19 00:10:49 crc kubenswrapper[5109]: exit 1
Feb 19 00:10:49 crc kubenswrapper[5109]: fi
Feb 19 00:10:49 crc kubenswrapper[5109]:
Feb 19 00:10:49 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}"
Feb 19 00:10:49 crc kubenswrapper[5109]: exec /usr/bin/ovnkube \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-interconnect \
Feb 19 00:10:49 crc kubenswrapper[5109]: --init-cluster-manager "${K8S_NODE}" \
Feb 19 00:10:49 crc kubenswrapper[5109]: --config-file=/run/ovnkube-config/ovnkube.conf \
Feb 19 00:10:49 crc kubenswrapper[5109]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \
Feb 19 00:10:49 crc kubenswrapper[5109]: --metrics-bind-address "127.0.0.1:29108" \
Feb 19 00:10:49 crc kubenswrapper[5109]: --metrics-enable-pprof \
Feb 19 00:10:49 crc kubenswrapper[5109]: --metrics-enable-config-duration \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${ovn_v4_join_subnet_opt} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${ovn_v6_join_subnet_opt} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${ovn_v4_transit_switch_subnet_opt} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${ovn_v6_transit_switch_subnet_opt} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${dns_name_resolver_enabled_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${persistent_ips_enabled_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${multi_network_enabled_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${network_segmentation_enabled_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${gateway_mode_flags} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${route_advertisements_enable_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${preconfigured_udn_addresses_enable_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-egress-ip=true \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-egress-firewall=true \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-egress-qos=true \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-egress-service=true \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-multicast \
Feb 19 00:10:49 crc kubenswrapper[5109]: --enable-multi-external-gateway=true \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${multi_network_policy_enabled_flag} \
Feb 19 00:10:49 crc kubenswrapper[5109]: ${admin_network_policy_enabled_flag}
Feb 19 00:10:49 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod
ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:10:49 crc kubenswrapper[5109]: > logger="UnhandledError"
Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.336209 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.368286 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.389171 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.389238 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.389259 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.389286 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.389312 5109 setters.go:618] "Node became not ready"
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.407916 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.440469 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with
unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.487308 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.491975 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.492029 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.492044 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.492062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.492079 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.521882 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.566245 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.594856 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.594939 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.594964 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.594997 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.595036 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.623072 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2
d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d793
22eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.648962 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.678090 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.678238 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.678293 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.678259829 +0000 UTC m=+81.514499858 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.678334 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.678390 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.678399 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.678382282 +0000 UTC m=+81.514622301 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.678625 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.678805 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.678766373 +0000 UTC m=+81.515006402 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.686835 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.697809 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.697865 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.697884 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.697968 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.697991 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.724001 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.780212 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.780285 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.780332 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" 
Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780424 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780467 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780488 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780505 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780604 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.780567891 +0000 UTC m=+81.616807930 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780629 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780698 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.780672564 +0000 UTC m=+81.616912583 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780706 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780733 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.780827 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:51.780801818 +0000 UTC m=+81.617041847 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.781875 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\
\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10
m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671
b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3c
ea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.800802 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.800881 5109 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.800900 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.800926 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.800946 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.809069 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.847318 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.884397 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.903421 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.903509 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.903533 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.903563 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.903588 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:49Z","lastTransitionTime":"2026-02-19T00:10:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.925001 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.964734 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.990239 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.990516 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.990517 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.990692 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:49 crc kubenswrapper[5109]: I0219 00:10:49.990779 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:49 crc kubenswrapper[5109]: E0219 00:10:49.990963 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.006248 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.006298 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.006314 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.006361 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.006384 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.016992 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.044044 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory
\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.090749 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.108783 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.108911 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.108930 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.108954 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.108971 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.126007 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-schedu
ler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345
491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"
gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.166786 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.206871 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.211501 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.211562 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.211581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.211604 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.211626 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.245604 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.289200 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.314234 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.314327 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.314347 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 
crc kubenswrapper[5109]: I0219 00:10:50.314376 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.314395 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.328470 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.363946 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.416333 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.416394 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.416411 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.416435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.416452 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.518515 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.518576 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.518589 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.518608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.518625 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.620946 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.621026 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.621069 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.621101 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.621123 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.723027 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.723092 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.723110 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.723133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.723152 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.825210 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.825277 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.825304 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.825332 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.825353 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.928294 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.928360 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.928382 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.928410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.928432 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:50Z","lastTransitionTime":"2026-02-19T00:10:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:50 crc kubenswrapper[5109]: I0219 00:10:50.990815 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:50 crc kubenswrapper[5109]: E0219 00:10:50.991086 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.021104 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cac
ffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140
bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d3
6a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.030626 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.030718 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.030746 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.030776 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.030799 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.038128 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.054595 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.068287 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.086425 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.099519 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.120901 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.129989 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99d
d17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.132705 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.132745 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.132759 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.132776 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.132788 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.143456 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.153821 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.164890 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.174111 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.182964 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.195757 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.210258 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.217253 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.227306 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.235090 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.235144 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.235155 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.235168 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.235178 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.237767 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.249511 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.337116 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.337199 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.337227 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.337258 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.337279 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.439565 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.439697 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.439722 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.439750 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.439770 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.542974 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.543052 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.543071 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.543098 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.543117 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.646067 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.646143 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.646170 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.646202 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.646226 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.703913 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.704089 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.704155 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.704323 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:10:55.704273714 +0000 UTC m=+85.540513763 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.704350 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.704460 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:55.704436109 +0000 UTC m=+85.540676108 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.704335 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.704606 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:10:55.704594614 +0000 UTC m=+85.540834613 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.748517 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.748586 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.748599 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.748618 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.748658 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.805175 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.805266 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.805331 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805501 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805533 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805567 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:51 crc 
kubenswrapper[5109]: E0219 00:10:51.805587 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:10:55.805561597 +0000 UTC m=+85.641801616 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805587 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805695 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805731 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805750 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805772 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:55.805746623 +0000 UTC m=+85.641986652 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.805881 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:10:55.805866646 +0000 UTC m=+85.642106675 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.850559 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.850667 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.850695 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.850735 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.850760 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.954909 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.955007 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.955025 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.955049 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.955067 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:51Z","lastTransitionTime":"2026-02-19T00:10:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.990327 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.990509 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:51 crc kubenswrapper[5109]: I0219 00:10:51.990574 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.990525 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.990819 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:10:51 crc kubenswrapper[5109]: E0219 00:10:51.991033 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.057940 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.058023 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.058049 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.058085 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.058110 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.160901 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.160985 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.161009 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.161035 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.161056 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.264122 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.264180 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.264197 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.264264 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.264284 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.366468 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.366523 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.366536 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.366553 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.366565 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.468510 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.468608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.468625 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.468682 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.468746 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.571464 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.571542 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.571567 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.571601 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.571624 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.674488 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.674535 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.674547 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.674563 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.674574 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.777817 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.777895 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.777916 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.777946 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.777972 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.880834 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.880876 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.880886 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.880903 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.880912 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.983608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.983715 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.983738 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.983793 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.983814 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:52Z","lastTransitionTime":"2026-02-19T00:10:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:52 crc kubenswrapper[5109]: I0219 00:10:52.991025 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:52 crc kubenswrapper[5109]: E0219 00:10:52.991171 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.086602 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.086686 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.086708 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.086731 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.086750 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.189371 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.189439 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.189457 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.189485 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.189506 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.291990 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.292045 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.292057 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.292078 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.292090 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.394146 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.394230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.394255 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.394283 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.394301 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.497487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.497561 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.497586 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.497616 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.497680 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.600537 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.600629 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.600702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.600732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.600820 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.703619 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.703722 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.703741 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.703767 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.703785 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.806807 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.806906 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.806926 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.806951 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.806969 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.909907 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.910006 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.910031 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.910055 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.910072 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:53Z","lastTransitionTime":"2026-02-19T00:10:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.991053 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.991095 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:53 crc kubenswrapper[5109]: I0219 00:10:53.991068 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:53 crc kubenswrapper[5109]: E0219 00:10:53.991291 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:10:53 crc kubenswrapper[5109]: E0219 00:10:53.991434 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:53 crc kubenswrapper[5109]: E0219 00:10:53.991601 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.012758 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.012837 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.012855 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.012882 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.012901 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.115971 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.116050 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.116075 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.116106 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.116130 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.219285 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.219344 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.219364 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.219389 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.219406 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.321912 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.321968 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.321987 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.322011 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.322029 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.424874 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.424952 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.424979 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.425009 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.425030 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.528413 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.528473 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.528490 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.528513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.528531 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.631136 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.631204 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.631222 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.631247 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.631265 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.733754 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.733820 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.733845 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.733877 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.733900 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.837026 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.837135 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.837158 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.837197 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.837220 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.940379 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.940487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.940512 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.940556 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.940581 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:54Z","lastTransitionTime":"2026-02-19T00:10:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:54 crc kubenswrapper[5109]: I0219 00:10:54.990794 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:54 crc kubenswrapper[5109]: E0219 00:10:54.990980 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.043726 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.043815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.043847 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.043884 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.043907 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.146971 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.147035 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.147056 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.147081 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.147099 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.249156 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.249219 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.249237 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.249261 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.249279 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.351062 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.351412 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.351435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.351459 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.351480 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.453670 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.453734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.453755 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.453779 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.453797 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.556799 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.556862 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.556878 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.556898 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.556913 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.659726 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.659810 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.659831 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.659856 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.659880 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.752713 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.752984 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:03.75293529 +0000 UTC m=+93.589175319 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.753189 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.753353 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.753476 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.753486 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.753581 5109 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:03.753551667 +0000 UTC m=+93.589791696 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.753707 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:03.753603319 +0000 UTC m=+93.589843338 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.762173 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.762223 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.762232 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.762248 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.762258 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.854438 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.854510 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.854557 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854739 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854799 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854798 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:55 crc 
kubenswrapper[5109]: E0219 00:10:55.854827 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854907 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:11:03.854882902 +0000 UTC m=+93.691122931 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854952 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.854996 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.855024 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.855034 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:03.855003155 +0000 UTC m=+93.691243184 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.855155 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:03.855133579 +0000 UTC m=+93.691373648 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.864862 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.864926 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.864946 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.864972 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.864990 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.968364 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.968432 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.968450 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.968476 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.968495 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:55Z","lastTransitionTime":"2026-02-19T00:10:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.990196 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.990476 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.990492 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:55 crc kubenswrapper[5109]: I0219 00:10:55.990548 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.990731 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:10:55 crc kubenswrapper[5109]: E0219 00:10:55.990877 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.071946 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.072018 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.072036 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.072061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.072079 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.175317 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.175389 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.175407 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.175438 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.175455 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.277892 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.277960 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.277978 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.278002 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.278020 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.380161 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.380228 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.380246 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.380271 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.380291 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.482702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.482781 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.482808 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.482838 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.482861 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.585858 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.585935 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.585964 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.585996 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.586019 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.688244 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.688336 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.688356 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.688383 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.688402 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.791860 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.791936 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.791954 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.791980 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.791998 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.894799 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.894885 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.894906 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.894932 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.894954 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.901423 5109 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.991299 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:10:56 crc kubenswrapper[5109]: E0219 00:10:56.991549 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.997118 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.997182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.997202 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.997226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:56 crc kubenswrapper[5109]: I0219 00:10:56.997249 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:56Z","lastTransitionTime":"2026-02-19T00:10:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.099070 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.099144 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.099172 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.099201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.099224 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.201387 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.201433 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.201446 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.201463 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.201475 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.304684 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.304781 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.304809 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.304899 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.304936 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.407272 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.407342 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.407361 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.407389 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.407408 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.511278 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.511369 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.511395 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.511427 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.511449 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.613890 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.613968 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.613989 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.614016 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.614041 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.716200 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.716296 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.716321 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.716350 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.716372 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.818551 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.818608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.818627 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.818702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.818742 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.921582 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.921792 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.921819 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.921843 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.921861 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:57Z","lastTransitionTime":"2026-02-19T00:10:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.990428 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.990893 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:10:57 crc kubenswrapper[5109]: I0219 00:10:57.990966 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:10:57 crc kubenswrapper[5109]: E0219 00:10:57.991055 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:10:57 crc kubenswrapper[5109]: E0219 00:10:57.990883 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:10:57 crc kubenswrapper[5109]: E0219 00:10:57.991221 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.024172 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.024265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.024285 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.024312 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.024330 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.126836 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.126909 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.126933 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.126958 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.126977 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.229195 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.229257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.229276 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.229301 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.229319 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.331868 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.331962 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.331980 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.332004 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.332022 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.434926 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.435040 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.435061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.435095 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.435117 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.537616 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.537727 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.537752 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.537783 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.537805 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.639919 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.639962 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.639976 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.639993 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.640005 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.742108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.742214 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.742239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.742289 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.742313 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.844454 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.844534 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.844556 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.844581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.844598 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.947165 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.947248 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.947273 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.947303 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.947320 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:58Z","lastTransitionTime":"2026-02-19T00:10:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.991770 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:10:58 crc kubenswrapper[5109]: E0219 00:10:58.991972 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:10:58 crc kubenswrapper[5109]: I0219 00:10:58.992338 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"
Feb 19 00:10:58 crc kubenswrapper[5109]: E0219 00:10:58.992815 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.051419 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.051495 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.051509 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.051554 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.051570 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:59Z","lastTransitionTime":"2026-02-19T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.153509 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.153590 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.153603 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.153621 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.153687 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:59Z","lastTransitionTime":"2026-02-19T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.256698 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.256771 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.256790 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.256815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.256836 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:59Z","lastTransitionTime":"2026-02-19T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.359384 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.359475 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.359504 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.359537 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.359566 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:10:59Z","lastTransitionTime":"2026-02-19T00:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.990859 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:10:59 crc kubenswrapper[5109]: E0219 00:10:59.991086 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.991211 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:10:59 crc kubenswrapper[5109]: E0219 00:10:59.991341 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:10:59 crc kubenswrapper[5109]: I0219 00:10:59.992432 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:10:59 crc kubenswrapper[5109]: E0219 00:10:59.992548 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:10:59 crc kubenswrapper[5109]: E0219 00:10:59.995122 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8dwfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-htkb9_openshift-multus(45b69efd-a181-4847-9934-8ea00d53e9fd): 
CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError"
Feb 19 00:10:59 crc kubenswrapper[5109]: E0219 00:10:59.996470 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-htkb9" podUID="45b69efd-a181-4847-9934-8ea00d53e9fd"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.519093 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.519147 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.519166 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.519195 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.519214 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.537049 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.542627 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.542754 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.542774 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.542801 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.542821 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.557882 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.563119 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.563201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.563252 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.563283 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.563301 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.578902 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.583373 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.583436 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.583455 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.583483 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.583502 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.597781 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.602573 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.602681 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.602707 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.602732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.602749 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.617240 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.617487 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.619140 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.619257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.619285 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.619316 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.619342 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.722109 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.722183 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.722208 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.722242 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.722264 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.825290 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.825340 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.825352 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.825369 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.825382 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.928381 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.928466 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.928492 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.928525 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.928548 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:00Z","lastTransitionTime":"2026-02-19T00:11:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:00 crc kubenswrapper[5109]: I0219 00:11:00.990735 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.990949 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.992812 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:00 crc kubenswrapper[5109]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:11:00 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:11:00 crc kubenswrapper[5109]: set -o allexport Feb 19 00:11:00 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:11:00 crc kubenswrapper[5109]: set +o allexport Feb 19 00:11:00 crc kubenswrapper[5109]: fi Feb 19 00:11:00 crc kubenswrapper[5109]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 19 00:11:00 crc kubenswrapper[5109]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 19 00:11:00 crc kubenswrapper[5109]: ho_enable="--enable-hybrid-overlay" Feb 19 00:11:00 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 19 00:11:00 crc kubenswrapper[5109]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 19 00:11:00 crc kubenswrapper[5109]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 19 00:11:00 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 00:11:00 crc kubenswrapper[5109]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 19 00:11:00 crc kubenswrapper[5109]: --webhook-host=127.0.0.1 \ Feb 19 00:11:00 crc kubenswrapper[5109]: --webhook-port=9743 \ Feb 19 00:11:00 crc kubenswrapper[5109]: ${ho_enable} \ Feb 19 00:11:00 crc kubenswrapper[5109]: --enable-interconnect \ Feb 19 00:11:00 crc 
kubenswrapper[5109]: --disable-approver \ Feb 19 00:11:00 crc kubenswrapper[5109]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 19 00:11:00 crc kubenswrapper[5109]: --wait-for-kubernetes-api=200s \ Feb 19 00:11:00 crc kubenswrapper[5109]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 19 00:11:00 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:11:00 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:00 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.993064 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.994200 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.994709 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:00 crc kubenswrapper[5109]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 19 00:11:00 crc kubenswrapper[5109]: set -euo pipefail Feb 19 00:11:00 crc kubenswrapper[5109]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 19 00:11:00 crc kubenswrapper[5109]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 19 00:11:00 crc kubenswrapper[5109]: # As the secret mount is optional we must wait for the files to be present. Feb 19 00:11:00 crc kubenswrapper[5109]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 19 00:11:00 crc kubenswrapper[5109]: TS=$(date +%s) Feb 19 00:11:00 crc kubenswrapper[5109]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 19 00:11:00 crc kubenswrapper[5109]: HAS_LOGGED_INFO=0 Feb 19 00:11:00 crc kubenswrapper[5109]: Feb 19 00:11:00 crc kubenswrapper[5109]: log_missing_certs(){ Feb 19 00:11:00 crc kubenswrapper[5109]: CUR_TS=$(date +%s) Feb 19 00:11:00 crc kubenswrapper[5109]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 19 00:11:00 crc kubenswrapper[5109]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 19 00:11:00 crc kubenswrapper[5109]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 19 00:11:00 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 19 00:11:00 crc kubenswrapper[5109]: HAS_LOGGED_INFO=1 Feb 19 00:11:00 crc kubenswrapper[5109]: fi Feb 19 00:11:00 crc kubenswrapper[5109]: } Feb 19 00:11:00 crc kubenswrapper[5109]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 19 00:11:00 crc kubenswrapper[5109]: log_missing_certs Feb 19 00:11:00 crc kubenswrapper[5109]: sleep 5 Feb 19 00:11:00 crc kubenswrapper[5109]: done Feb 19 00:11:00 crc kubenswrapper[5109]: Feb 19 00:11:00 crc kubenswrapper[5109]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 19 00:11:00 crc kubenswrapper[5109]: exec /usr/bin/kube-rbac-proxy \ Feb 19 00:11:00 crc kubenswrapper[5109]: --logtostderr \ Feb 19 00:11:00 crc kubenswrapper[5109]: --secure-listen-address=:9108 \ Feb 19 00:11:00 crc kubenswrapper[5109]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 19 00:11:00 crc kubenswrapper[5109]: --upstream=http://127.0.0.1:29108/ \ Feb 19 00:11:00 crc kubenswrapper[5109]: --tls-private-key-file=${TLS_PK} \ Feb 19 00:11:00 crc kubenswrapper[5109]: --tls-cert-file=${TLS_CERT} Feb 19 00:11:00 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:00 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:11:00 crc kubenswrapper[5109]: E0219 00:11:00.995874 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:00 crc kubenswrapper[5109]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:11:00 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:11:00 crc kubenswrapper[5109]: set -o allexport Feb 19 00:11:00 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:11:00 crc kubenswrapper[5109]: set +o allexport Feb 19 00:11:00 crc kubenswrapper[5109]: fi Feb 19 00:11:00 crc kubenswrapper[5109]: Feb 19 00:11:00 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 19 00:11:00 crc kubenswrapper[5109]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 19 
00:11:00 crc kubenswrapper[5109]: --disable-webhook \ Feb 19 00:11:00 crc kubenswrapper[5109]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 19 00:11:00 crc kubenswrapper[5109]: --loglevel="${LOGLEVEL}" Feb 19 00:11:00 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:00 crc kubenswrapper[5109]: > 
logger="UnhandledError" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:00.997042 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:00.997205 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:01 crc kubenswrapper[5109]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ -f "/env/_master" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: set -o allexport Feb 19 00:11:01 crc kubenswrapper[5109]: source "/env/_master" Feb 19 00:11:01 crc kubenswrapper[5109]: set +o allexport Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 
00:11:01 crc kubenswrapper[5109]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "" != "" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # This is needed so that converting clusters from GA to TP Feb 19 00:11:01 crc kubenswrapper[5109]: # will rollout control plane pods as well Feb 19 00:11:01 crc kubenswrapper[5109]: network_segmentation_enabled_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: multi_network_enabled_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "true" != "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: multi_network_enabled_flag="--enable-multi-network" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 
crc kubenswrapper[5109]: route_advertisements_enable_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # Enable multi-network policy if configured (control-plane always full mode) Feb 19 00:11:01 crc kubenswrapper[5109]: multi_network_policy_enabled_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "false" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # Enable admin network policy if configured (control-plane always full mode) Feb 19 00:11:01 crc kubenswrapper[5109]: admin_network_policy_enabled_flag= Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "true" == "true" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: if [ "shared" == "shared" ]; then Feb 19 00:11:01 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode shared" Feb 19 00:11:01 crc kubenswrapper[5109]: elif [ "shared" == "local" ]; then Feb 19 00:11:01 crc kubenswrapper[5109]: gateway_mode_flags="--gateway-mode local" Feb 19 00:11:01 crc kubenswrapper[5109]: else Feb 19 00:11:01 crc 
kubenswrapper[5109]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 19 00:11:01 crc kubenswrapper[5109]: exit 1 Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 19 00:11:01 crc kubenswrapper[5109]: exec /usr/bin/ovnkube \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-interconnect \ Feb 19 00:11:01 crc kubenswrapper[5109]: --init-cluster-manager "${K8S_NODE}" \ Feb 19 00:11:01 crc kubenswrapper[5109]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 19 00:11:01 crc kubenswrapper[5109]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 19 00:11:01 crc kubenswrapper[5109]: --metrics-bind-address "127.0.0.1:29108" \ Feb 19 00:11:01 crc kubenswrapper[5109]: --metrics-enable-pprof \ Feb 19 00:11:01 crc kubenswrapper[5109]: --metrics-enable-config-duration \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${ovn_v4_join_subnet_opt} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${ovn_v6_join_subnet_opt} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${dns_name_resolver_enabled_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${persistent_ips_enabled_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${multi_network_enabled_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${network_segmentation_enabled_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${gateway_mode_flags} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${route_advertisements_enable_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${preconfigured_udn_addresses_enable_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-egress-ip=true \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-egress-firewall=true \ Feb 19 00:11:01 crc kubenswrapper[5109]: 
--enable-egress-qos=true \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-egress-service=true \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-multicast \ Feb 19 00:11:01 crc kubenswrapper[5109]: --enable-multi-external-gateway=true \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${multi_network_policy_enabled_flag} \ Feb 19 00:11:01 crc kubenswrapper[5109]: ${admin_network_policy_enabled_flag} Feb 19 00:11:01 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gc7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-9cp94_openshift-ovn-kubernetes(5a1c588b-414d-4d41-94a6-b74745ffd8c9): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:01 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:00.999027 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.006499 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.033707 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.033756 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.033770 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.033786 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.033797 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.049386 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.066988 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory
\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\
" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.092080 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.104951 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.115583 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.126323 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.135948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.136029 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.136043 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.136087 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.136105 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.136662 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.149191 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.160586 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.169919 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.180989 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.191648 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.202191 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.225234 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T
00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf
05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\
\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.237983 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.238043 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.238143 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.238155 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.238173 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.238184 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.248449 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.258795 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.270419 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet 
been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.341115 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.341237 5109 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.341258 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.341284 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.341303 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.443379 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.443729 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.443914 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.444079 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.444222 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.547342 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.547419 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.547442 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.547471 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.547489 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.649670 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.649719 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.649731 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.649749 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.649762 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.752226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.752597 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.752612 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.752647 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.752664 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.854406 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.854449 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.854461 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.854544 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.854557 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.956811 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.956890 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.956918 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.956967 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.956994 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:01Z","lastTransitionTime":"2026-02-19T00:11:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.990263 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.990443 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:01 crc kubenswrapper[5109]: I0219 00:11:01.991018 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:01.991032 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:01.991239 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:01.991434 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:01.993385 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:01 crc kubenswrapper[5109]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 19 00:11:01 crc kubenswrapper[5109]: set -uo pipefail Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 19 00:11:01 crc kubenswrapper[5109]: HOSTS_FILE="/etc/hosts" Feb 19 00:11:01 crc kubenswrapper[5109]: TEMP_FILE="/tmp/hosts.tmp" Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # Make a temporary file with the old hosts file's attributes. Feb 19 00:11:01 crc kubenswrapper[5109]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 19 00:11:01 crc kubenswrapper[5109]: echo "Failed to preserve hosts file. Exiting." Feb 19 00:11:01 crc kubenswrapper[5109]: exit 1 Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: while true; do Feb 19 00:11:01 crc kubenswrapper[5109]: declare -A svc_ips Feb 19 00:11:01 crc kubenswrapper[5109]: for svc in "${services[@]}"; do Feb 19 00:11:01 crc kubenswrapper[5109]: # Fetch service IP from cluster dns if present. We make several tries Feb 19 00:11:01 crc kubenswrapper[5109]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 19 00:11:01 crc kubenswrapper[5109]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 19 00:11:01 crc kubenswrapper[5109]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 19 00:11:01 crc kubenswrapper[5109]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:11:01 crc kubenswrapper[5109]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:11:01 crc kubenswrapper[5109]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 19 00:11:01 crc kubenswrapper[5109]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 19 00:11:01 crc kubenswrapper[5109]: for i in ${!cmds[*]} Feb 19 00:11:01 crc kubenswrapper[5109]: do Feb 19 00:11:01 crc kubenswrapper[5109]: ips=($(eval "${cmds[i]}")) Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: svc_ips["${svc}"]="${ips[@]}" Feb 19 00:11:01 crc kubenswrapper[5109]: break Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: done Feb 19 00:11:01 crc kubenswrapper[5109]: done Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # Update /etc/hosts only if we get valid service IPs Feb 19 00:11:01 crc kubenswrapper[5109]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 19 00:11:01 crc kubenswrapper[5109]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 19 00:11:01 crc kubenswrapper[5109]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 19 00:11:01 crc kubenswrapper[5109]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 19 00:11:01 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:11:01 crc kubenswrapper[5109]: continue Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # Append resolver entries for services Feb 19 00:11:01 crc kubenswrapper[5109]: rc=0 Feb 19 00:11:01 crc kubenswrapper[5109]: for svc in "${!svc_ips[@]}"; do Feb 19 00:11:01 crc kubenswrapper[5109]: for ip in ${svc_ips[${svc}]}; do Feb 19 00:11:01 crc kubenswrapper[5109]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 19 00:11:01 crc kubenswrapper[5109]: done Feb 19 00:11:01 crc kubenswrapper[5109]: done Feb 19 00:11:01 crc kubenswrapper[5109]: if [[ $rc -ne 0 ]]; then Feb 19 00:11:01 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:11:01 crc kubenswrapper[5109]: continue Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: Feb 19 00:11:01 crc kubenswrapper[5109]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 19 00:11:01 crc kubenswrapper[5109]: # Replace /etc/hosts with our modified version if needed Feb 19 00:11:01 crc kubenswrapper[5109]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 19 00:11:01 crc kubenswrapper[5109]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 19 00:11:01 crc kubenswrapper[5109]: fi Feb 19 00:11:01 crc kubenswrapper[5109]: sleep 60 & wait Feb 19 00:11:01 crc kubenswrapper[5109]: unset svc_ips Feb 19 00:11:01 crc kubenswrapper[5109]: done Feb 19 00:11:01 crc kubenswrapper[5109]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mndtm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-bjs9p_openshift-dns(42e68a30-b704-4b69-b682-602323a8ead0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:01 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:11:01 crc kubenswrapper[5109]: E0219 00:11:01.994666 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-bjs9p" podUID="42e68a30-b704-4b69-b682-602323a8ead0" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.059910 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.059980 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.059998 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.060024 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.060042 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.162340 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.162403 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.162414 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.162437 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.162451 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.265690 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.265790 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.265824 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.265856 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.265882 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.369183 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.369239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.369255 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.369277 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.369292 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.471049 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.471139 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.471158 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.471188 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.471206 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.573364 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.573434 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.573458 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.573487 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.573509 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.676472 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.676537 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.676553 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.676575 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.676588 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.778876 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.778944 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.778956 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.778974 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.778986 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.882775 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.882867 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.882901 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.882938 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.882967 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.985619 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.985770 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.985799 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.985837 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.985906 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:02Z","lastTransitionTime":"2026-02-19T00:11:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:02 crc kubenswrapper[5109]: I0219 00:11:02.991246 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.992037 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.994305 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.994771 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 19 00:11:02 crc kubenswrapper[5109]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 19 00:11:02 crc kubenswrapper[5109]: while [ true ]; Feb 19 00:11:02 crc kubenswrapper[5109]: do Feb 19 00:11:02 crc kubenswrapper[5109]: for f in $(ls /tmp/serviceca); do Feb 19 00:11:02 crc kubenswrapper[5109]: echo $f Feb 19 00:11:02 crc kubenswrapper[5109]: ca_file_path="/tmp/serviceca/${f}" Feb 19 00:11:02 crc kubenswrapper[5109]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 19 00:11:02 crc kubenswrapper[5109]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 19 00:11:02 crc kubenswrapper[5109]: if [ -e "${reg_dir_path}" ]; 
then Feb 19 00:11:02 crc kubenswrapper[5109]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 19 00:11:02 crc kubenswrapper[5109]: else Feb 19 00:11:02 crc kubenswrapper[5109]: mkdir $reg_dir_path Feb 19 00:11:02 crc kubenswrapper[5109]: cp $ca_file_path $reg_dir_path/ca.crt Feb 19 00:11:02 crc kubenswrapper[5109]: fi Feb 19 00:11:02 crc kubenswrapper[5109]: done Feb 19 00:11:02 crc kubenswrapper[5109]: for d in $(ls /etc/docker/certs.d); do Feb 19 00:11:02 crc kubenswrapper[5109]: echo $d Feb 19 00:11:02 crc kubenswrapper[5109]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 19 00:11:02 crc kubenswrapper[5109]: reg_conf_path="/tmp/serviceca/${dp}" Feb 19 00:11:02 crc kubenswrapper[5109]: if [ ! -e "${reg_conf_path}" ]; then Feb 19 00:11:02 crc kubenswrapper[5109]: rm -rf /etc/docker/certs.d/$d Feb 19 00:11:02 crc kubenswrapper[5109]: fi Feb 19 00:11:02 crc kubenswrapper[5109]: done Feb 19 00:11:02 crc kubenswrapper[5109]: sleep 60 & wait ${!} Feb 19 00:11:02 crc kubenswrapper[5109]: done Feb 19 00:11:02 crc kubenswrapper[5109]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llz75,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-cltq5_openshift-image-registry(ea82223b-3009-45c2-bf16-6037e4f81188): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 19 00:11:02 crc kubenswrapper[5109]: > logger="UnhandledError" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.996036 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-cltq5" podUID="ea82223b-3009-45c2-bf16-6037e4f81188" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.997065 5109 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mc4c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
machine-config-daemon-ntpdt_openshift-machine-config-operator(3dd0092b-65e0-496b-aad5-33d7ca9ca9d6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 19 00:11:02 crc kubenswrapper[5109]: E0219 00:11:02.998506 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.014361 5109 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.089157 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.089234 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.089260 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.089291 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.089315 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.192109 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.192182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.192200 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.192232 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.192250 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.295023 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.295095 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.295122 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.295154 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.295177 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.397049 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.397112 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.397131 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.397190 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.397218 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.499513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.499604 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.499673 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.499709 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.499730 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.603011 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.603066 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.603075 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.603095 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.603107 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.706507 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.706584 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.706603 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.706681 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.706712 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.754869 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.755141 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.755196 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.755159465 +0000 UTC m=+109.591399464 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.755370 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.755345 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.755571 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.755745 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.755714311 +0000 UTC m=+109.591954330 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.756294 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.756250017 +0000 UTC m=+109.592490206 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.809540 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.809676 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.809710 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.809744 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.809770 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.856437 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.856563 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.856598 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856684 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856756 5109 projected.go:289] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856791 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856803 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856799 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856817 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.856783978 +0000 UTC m=+109.693023997 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856827 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856847 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856868 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.85685107 +0000 UTC m=+109.693091059 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.856919 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:19.856899321 +0000 UTC m=+109.693139320 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.912705 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.912769 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.912786 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.912808 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.912824 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:03Z","lastTransitionTime":"2026-02-19T00:11:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.991091 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.991296 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.991487 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:11:03 crc kubenswrapper[5109]: I0219 00:11:03.992093 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.992217 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.993172 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.994741 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:11:03 crc kubenswrapper[5109]: 	container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT=""
Feb 19 00:11:03 crc kubenswrapper[5109]: 	/entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT
Feb 19 00:11:03 crc kubenswrapper[5109]: 	],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvxzg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-ctz69_openshift-multus(9d3c36ec-d151-4cb3-8bcb-931c2665a1e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:11:03 crc kubenswrapper[5109]:  > logger="UnhandledError"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.995827 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:11:03 crc kubenswrapper[5109]: 	init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig
Feb 19 00:11:03 crc kubenswrapper[5109]: 	apiVersion: v1
Feb 19 00:11:03 crc kubenswrapper[5109]: 	clusters:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	- cluster:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
Feb 19 00:11:03 crc kubenswrapper[5109]: 	server: https://api-int.crc.testing:6443
Feb 19 00:11:03 crc kubenswrapper[5109]: 	name: default-cluster
Feb 19 00:11:03 crc kubenswrapper[5109]: 	contexts:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	- context:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	cluster: default-cluster
Feb 19 00:11:03 crc kubenswrapper[5109]: 	namespace: default
Feb 19 00:11:03 crc kubenswrapper[5109]: 	user: default-auth
Feb 19 00:11:03 crc kubenswrapper[5109]: 	name: default-context
Feb 19 00:11:03 crc kubenswrapper[5109]: 	current-context: default-context
Feb 19 00:11:03 crc kubenswrapper[5109]: 	kind: Config
Feb 19 00:11:03 crc kubenswrapper[5109]: 	preferences: {}
Feb 19 00:11:03 crc kubenswrapper[5109]: 	users:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	- name: default-auth
Feb 19 00:11:03 crc kubenswrapper[5109]: 	user:
Feb 19 00:11:03 crc kubenswrapper[5109]: 	client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 19 00:11:03 crc kubenswrapper[5109]: 	client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem
Feb 19 00:11:03 crc kubenswrapper[5109]: 	EOF
Feb 19 00:11:03 crc kubenswrapper[5109]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj2g9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-bgfm9_openshift-ovn-kubernetes(2955042f-e905-4bd8-893a-97e7c9723fca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:11:03 crc kubenswrapper[5109]:  > logger="UnhandledError"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.995839 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-ctz69" podUID="9d3c36ec-d151-4cb3-8bcb-931c2665a1e7"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.996127 5109 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Feb 19 00:11:03 crc kubenswrapper[5109]: 	container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Feb 19 00:11:03 crc kubenswrapper[5109]: 	set -o allexport
Feb 19 00:11:03 crc kubenswrapper[5109]: 	if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Feb 19 00:11:03 crc kubenswrapper[5109]: 	source /etc/kubernetes/apiserver-url.env
Feb 19 00:11:03 crc kubenswrapper[5109]: 	else
Feb 19 00:11:03 crc kubenswrapper[5109]: 	echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Feb 19 00:11:03 crc kubenswrapper[5109]: 	exit 1
Feb 19 00:11:03 crc kubenswrapper[5109]: 	fi
Feb 19 00:11:03 crc kubenswrapper[5109]: 	exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Feb 19 00:11:03 crc kubenswrapper[5109]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Feb 19 00:11:03 crc kubenswrapper[5109]:  > logger="UnhandledError"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.997987 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8"
Feb 19 00:11:03 crc kubenswrapper[5109]: E0219 00:11:03.998035 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.016095 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.016164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.016184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.016210 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.016228 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.119410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.119478 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.119497 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.119536 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.119554 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.222195 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.222276 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.222300 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.222329 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.222353 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.325413 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.325518 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.325538 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.325581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.325601 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.428555 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.428625 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.428687 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.428712 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.428731 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.531357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.531430 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.531443 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.531459 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.531471 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.634806 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.634871 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.634893 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.634917 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.634935 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.737507 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.737545 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.737554 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.737567 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.737576 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.839184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.839254 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.839274 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.839300 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.839318 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.941498 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.941543 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.941552 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.941566 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.941578 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:04Z","lastTransitionTime":"2026-02-19T00:11:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:04 crc kubenswrapper[5109]: I0219 00:11:04.990973 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:11:04 crc kubenswrapper[5109]: E0219 00:11:04.991212 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.044201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.044243 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.044257 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.044276 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.044295 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.147212 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.147277 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.147296 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.147320 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.147336 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.249302 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.249357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.249374 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.249396 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.249413 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.351048 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.351104 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.351220 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.351258 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.351283 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.453955 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.454064 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.454089 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.454125 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.454150 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.557083 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.557182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.557210 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.557243 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.557265 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.660346 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.660459 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.660481 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.660512 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.660536 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.763072 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.763154 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.763167 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.763188 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.763201 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.865999 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.866085 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.866125 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.866156 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.866178 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.969217 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.969281 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.969299 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.969327 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.969345 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:05Z","lastTransitionTime":"2026-02-19T00:11:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.990875 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.990958 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:05 crc kubenswrapper[5109]: I0219 00:11:05.991068 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:05 crc kubenswrapper[5109]: E0219 00:11:05.991273 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:05 crc kubenswrapper[5109]: E0219 00:11:05.991386 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:05 crc kubenswrapper[5109]: E0219 00:11:05.991509 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.076697 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.076794 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.076814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.076848 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.076867 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.180079 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.180520 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.180720 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.180756 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.180774 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.283783 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.283872 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.283891 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.283920 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.283939 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.386448 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.386525 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.386545 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.386571 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.386589 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.489071 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.489144 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.489171 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.489203 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.489225 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.591329 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.591402 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.591425 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.591455 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.591478 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.693877 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.693944 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.693955 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.693977 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.693990 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.802063 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.802211 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.802233 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.802261 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.802284 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.904751 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.904818 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.904835 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.904863 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.904882 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:06Z","lastTransitionTime":"2026-02-19T00:11:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:06 crc kubenswrapper[5109]: I0219 00:11:06.991106 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:06 crc kubenswrapper[5109]: E0219 00:11:06.991703 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.007513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.007574 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.007589 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.007611 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.007627 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.109826 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.109931 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.109982 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.110008 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.110027 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.212185 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.212268 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.212292 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.212317 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.212334 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.314561 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.314616 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.314655 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.314678 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.314693 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.416540 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.416598 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.416615 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.416664 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.416682 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.519251 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.519324 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.519343 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.519446 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.519469 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.622671 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.623091 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.623365 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.623516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.623761 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.726110 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.726240 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.726269 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.726301 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.726323 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.828516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.828569 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.828586 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.828609 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.828627 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.930932 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.931008 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.931027 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.931055 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.931073 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:07Z","lastTransitionTime":"2026-02-19T00:11:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.991075 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.991206 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:07 crc kubenswrapper[5109]: E0219 00:11:07.991508 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:11:07 crc kubenswrapper[5109]: I0219 00:11:07.991242 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:11:07 crc kubenswrapper[5109]: E0219 00:11:07.991734 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:11:07 crc kubenswrapper[5109]: E0219 00:11:07.991917 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.033116 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.033183 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.033209 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.033239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.033261 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.136133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.136207 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.136226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.136252 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.136271 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.239079 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.239164 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.239184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.239211 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.239230 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.341907 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.341955 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.341966 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.341982 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.341994 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.444802 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.444848 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.444859 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.444873 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.444884 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.547732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.547799 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.547814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.547835 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.547847 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.638963 5109 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.650386 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.650438 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.650457 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.650481 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.650500 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.753531 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.753599 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.753617 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.753670 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.753691 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.856246 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.856324 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.856343 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.856371 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.856391 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.958738 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.958828 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.958856 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.958888 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.958908 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:08Z","lastTransitionTime":"2026-02-19T00:11:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:08 crc kubenswrapper[5109]: I0219 00:11:08.991100 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:11:08 crc kubenswrapper[5109]: E0219 00:11:08.991286 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.061269 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.061333 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.061352 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.061374 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.061393 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.163230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.163305 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.163330 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.163357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.163380 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.266001 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.266076 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.266104 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.266132 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.266155 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.368912 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.368993 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.369014 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.369039 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.369058 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.473854 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.473916 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.473935 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.473962 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.473980 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.576858 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.576919 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.576939 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.576962 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.576981 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.680039 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.680123 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.680145 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.680173 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.680193 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.783501 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.783603 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.783628 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.783701 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.783733 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.886411 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.886547 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.886575 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.886608 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.886669 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.988804 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.988887 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.988911 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.988941 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.988965 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:09Z","lastTransitionTime":"2026-02-19T00:11:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.991052 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:11:09 crc kubenswrapper[5109]: E0219 00:11:09.991196 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.991247 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:11:09 crc kubenswrapper[5109]: I0219 00:11:09.991349 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:09 crc kubenswrapper[5109]: E0219 00:11:09.991495 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:11:09 crc kubenswrapper[5109]: E0219 00:11:09.992487 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.091533 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.091605 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.091624 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.091681 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.091700 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.194165 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.194239 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.194265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.194297 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.194321 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.297286 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.297347 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.297365 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.297390 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.297408 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.399415 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.399481 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.399500 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.399525 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.399544 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.502727 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.502796 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.502814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.502841 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.502859 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.606011 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.606084 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.606107 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.606134 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.606151 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.711691 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.711740 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.711753 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.711771 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.711782 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.814925 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.814983 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.814998 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.815019 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.815036 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.914903 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.914967 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.914986 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.915013 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.915031 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: E0219 00:11:10.932777 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.937714 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.937767 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.937785 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.937809 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.937826 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: E0219 00:11:10.953431 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.958201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.958284 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.958314 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.958349 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.958375 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: E0219 00:11:10.975907 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.980731 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.980795 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.980820 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.980850 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.980872 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:10Z","lastTransitionTime":"2026-02-19T00:11:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:10 crc kubenswrapper[5109]: I0219 00:11:10.991181 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:10 crc kubenswrapper[5109]: E0219 00:11:10.991377 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:10 crc kubenswrapper[5109]: E0219 00:11:10.999722 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.002967 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.003010 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.003020 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.003036 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.003045 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.009247 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.016719 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"e671bad5-2a36-4927-b785-4272497c90ae\\\",\\\"systemUUID\\\":\\\"6cf93e6e-89e8-4c26-9599-93db5625187a\\\"},\\\"runtimeHa
ndlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.016966 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.018587 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.018674 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.018695 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.018723 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.018741 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.020419 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.038108 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.050439 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.060936 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.077615 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.092883 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.106411 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.121054 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.121131 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.121153 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.121178 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.121196 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.130923 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.144030 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.154419 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.164597 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.177521 5109 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.185326 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.200553 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.208868 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99d
d17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.223452 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.223503 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.223516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.223535 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.223547 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.225878 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.238739 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.248107 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.332735 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.332816 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.332842 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.332877 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.332905 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.436003 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.436093 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.436121 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.436152 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.436171 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.539251 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.539355 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.539384 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.539419 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.539444 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.642026 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.642101 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.642121 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.642148 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.642168 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.744333 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.744410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.744422 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.744445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.744465 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.847440 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.847515 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.847535 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.847591 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.847613 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.950724 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.950797 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.950816 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.950843 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.950860 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:11Z","lastTransitionTime":"2026-02-19T00:11:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.990494 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.990586 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.990849 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.990827 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.991048 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.991216 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:11 crc kubenswrapper[5109]: I0219 00:11:11.992080 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99" Feb 19 00:11:11 crc kubenswrapper[5109]: E0219 00:11:11.992371 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.053767 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.053856 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.053876 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.053897 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.053912 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.156154 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.156305 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.156335 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.156371 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.156433 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.258493 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.258590 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.258689 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.258722 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.258745 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.361362 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.361585 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.361616 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.361694 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.361734 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.464727 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.464795 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.464805 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.464829 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.464840 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.566708 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.566761 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.566776 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.566797 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.566812 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.669607 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.669731 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.669751 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.669779 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.669801 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.772272 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.772357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.772383 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.772410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.772429 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.874857 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.874924 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.874938 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.874957 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.874969 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.977447 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.977513 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.977530 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.977554 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.977572 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:12Z","lastTransitionTime":"2026-02-19T00:11:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:12 crc kubenswrapper[5109]: I0219 00:11:12.991005 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:12 crc kubenswrapper[5109]: E0219 00:11:12.991347 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.080353 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.080422 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.080441 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.080510 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.080532 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.182948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.183036 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.183064 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.183098 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.183127 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.285830 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.286557 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.286662 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.286711 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.286737 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.389660 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.389707 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.389717 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.389732 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.389742 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.404274 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4d8b6b92d82d118ff0387cb7069aa306f1f03b3338463602eecabccfd3dbecf2"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.404381 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"424542e8d167ebbfae509bed4325b624fbee571d68b88fcafc73f434e038a9c9"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.428170 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"45b69efd-a181-4847-9934-8ea00d53e9fd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8dwfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-htkb9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.445009 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.457198 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-cltq5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea82223b-3009-45c2-bf16-6037e4f81188\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-llz75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-cltq5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.474813 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6bb42c15-be29-463f-98ea-9bbf814bc554\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://c7f80b6ba65d561c8512c447557f13abbe70095634f461aa95685e9d1cbc64d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2fd0da03b7daee35f1cb445515a77c598acfbcaf37002cdc5c04320aa4a0d150\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3d7698a290363eeb698116e8d6e39de0eb74124d7044206235852ff95c4ca22d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.488611 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.492845 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.492915 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.492933 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.492958 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.492976 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.501468 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-ctz69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fvxzg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"multus-ctz69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.526435 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5d1ac293-9a27-42ee-b882-832ff39367d5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://aa122201c1a5a7e1eca25b47b167828ab94bf320c36120bb9c0cd165e74b3802\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1fd38e4d1a5fac78ab8465fa27ac6e131c905385cd4f2723c127e1dd477b7ecd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2f3a0d9923abbcf1ba9b07927bcf68b071130928242977dd2d62887a60697c09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://04f71f3ab827c2fb119a8b71a5f5f65b05d7ef7062abcafaf21d7b66315d6105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memor
y\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:36Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://681fa4abe25990e50a6eb3d708cacffca053808c7b70a95c61f72e58b9968d2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:35Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o
://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://140bb02f18062176cdb206b6e3a09a9f9d79322eb223cbd5e063d49eb29d9823\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9ed79e4b53ac7fb400d326ac6c83ade7d0ccafbfea157a992d43ef56474f5f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00
:09:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a9211e6c3f16b9f6926851fc5660c688908d76dcaca3cea7156c9333c2ebe5e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:34Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.537013 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.550422 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.560500 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-scmsj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d54tt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-scmsj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.568678 5109 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5mc4c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-ntpdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.576813 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-bjs9p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42e68a30-b704-4b69-b682-602323a8ead0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mndtm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-bjs9p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.591675 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2955042f-e905-4bd8-893a-97e7c9723fca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kj2g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-bgfm9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.594878 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.594934 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.594944 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.594957 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.594966 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.604680 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"acc73639-5cae-4d42-8db7-8b5cb8c066e8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08d8d353ef1a99dd17c93ed684e737971d88184ba3bc0680b13d09c9e9141676\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e60411079c5460b17c619b5fec5fcf92720af7ee18bba7ce9ab847c64e4b09b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.628903 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f6b74d2e-e32f-4317-a051-fc2f98ac2928\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://902dad25ca
201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-19T00:10:36Z\\\",\\\"message\\\":\\\"439450 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0219 00:10:36.440278 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3078730297/tls.crt::/tmp/serving-cert-3078730297/tls.key\\\\\\\"\\\\nI0219 00:10:36.751214 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0219 00:10:36.752715 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0219 00:10:36.752732 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0219 00:10:36.752753 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0219 00:10:36.752758 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0219 00:10:36.755831 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0219 00:10:36.755849 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0219 00:10:36.755857 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0219 00:10:36.755861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' 
detected.\\\\nW0219 00:10:36.755864 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0219 00:10:36.755867 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0219 00:10:36.755881 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nI0219 00:10:36.759208 1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController\\\\nI0219 00:10:36.759327 1 shared_informer.go:350] \\\\\\\"Waiting for caches to sync\\\\\\\" controller=\\\\\\\"RequestHeaderAuthRequestController\\\\\\\"\\\\nF0219 00:10:36.759546 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-19T00:10:36Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.643785 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0974614b-47f6-4573-9fe9-070a9c87ed13\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:09:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://820801d53d40c930c0f082a48f8934bfd16e092537b6e145260a2f390eebee71\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cf7115e8fa2db7d4512172fbefab089cf700d74cd0dc769515bec456a6e96f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9e955f3e2d45d38652372a440b47b46d0a7fe9139b2bef91dabb9d4165ff7ad5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:09:33Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8cd082e87b60a6b72dd9fa882d42ac129a451ce1024f28837fe581b881b3e95b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-19T00:09:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-19T00:09:32Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:09:31Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.660883 5109 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.674410 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:48Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-19T00:11:13Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4d8b6b92d82d118ff0387cb7069aa306f1f03b3338463602eecabccfd3dbecf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:11:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\
"},\\\"containerID\\\":\\\"cri-o://424542e8d167ebbfae509bed4325b624fbee571d68b88fcafc73f434e038a9c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-19T00:11:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.684996 5109 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5a1c588b-414d-4d41-94a6-b74745ffd8c9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-19T00:10:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5gc7q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-19T00:10:47Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-9cp94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.697478 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 
00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.697532 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.697547 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.697566 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.697578 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.800265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.800325 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.800343 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.800369 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.800386 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.903084 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.903174 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.903193 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.903219 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.903237 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:13Z","lastTransitionTime":"2026-02-19T00:11:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.991034 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.991123 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:13 crc kubenswrapper[5109]: E0219 00:11:13.991233 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:13 crc kubenswrapper[5109]: I0219 00:11:13.991485 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:13 crc kubenswrapper[5109]: E0219 00:11:13.991530 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:13 crc kubenswrapper[5109]: E0219 00:11:13.991745 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.005815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.005956 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.005975 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.006049 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.006069 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.109019 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.109088 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.109107 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.109133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.109152 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.211321 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.211363 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.211373 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.211388 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.211399 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.313343 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.313424 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.313445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.313469 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.313492 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.410392 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerStarted","Data":"3b18b6cb9ecbc8f627e7ca3d1fc589f7dde96bae1b61f9b1967af28ce1998245"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.416726 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.416790 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.416812 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.416835 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.416853 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.462004 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=27.461976909 podStartE2EDuration="27.461976909s" podCreationTimestamp="2026-02-19 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.436538675 +0000 UTC m=+104.272778704" watchObservedRunningTime="2026-02-19 00:11:14.461976909 +0000 UTC m=+104.298216908" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.508253 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=27.508233394 podStartE2EDuration="27.508233394s" podCreationTimestamp="2026-02-19 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.488384411 +0000 UTC m=+104.324624430" watchObservedRunningTime="2026-02-19 00:11:14.508233394 +0000 UTC m=+104.344473463" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.518968 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.519022 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.519035 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.519053 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.519066 5109 setters.go:618] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.621734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.621795 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.621814 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.621837 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.621859 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.636464 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=27.636440623 podStartE2EDuration="27.636440623s" podCreationTimestamp="2026-02-19 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.6359804 +0000 UTC m=+104.472220449" watchObservedRunningTime="2026-02-19 00:11:14.636440623 +0000 UTC m=+104.472680622" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.723926 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.724017 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.724039 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.724066 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.724083 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.728822 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=27.728793749 podStartE2EDuration="27.728793749s" podCreationTimestamp="2026-02-19 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:14.72816703 +0000 UTC m=+104.564407019" watchObservedRunningTime="2026-02-19 00:11:14.728793749 +0000 UTC m=+104.565033758" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.825986 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.826037 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.826050 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.826068 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.826093 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.928212 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.928265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.928278 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.928295 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.928309 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:14Z","lastTransitionTime":"2026-02-19T00:11:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:14 crc kubenswrapper[5109]: I0219 00:11:14.990699 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:14 crc kubenswrapper[5109]: E0219 00:11:14.990949 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.030234 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.030584 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.030600 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.030616 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.030678 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.132881 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.132924 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.132933 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.132948 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.132958 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.235410 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.235453 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.235464 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.235479 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.235489 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.337203 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.337292 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.337314 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.337698 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.337721 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.416457 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerStarted","Data":"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.416561 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerStarted","Data":"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.419379 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="3b18b6cb9ecbc8f627e7ca3d1fc589f7dde96bae1b61f9b1967af28ce1998245" exitCode=0 Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.419530 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"3b18b6cb9ecbc8f627e7ca3d1fc589f7dde96bae1b61f9b1967af28ce1998245"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.422819 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" exitCode=0 Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.422887 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.436269 5109 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podStartSLOduration=83.436247463 podStartE2EDuration="1m23.436247463s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:15.435818391 +0000 UTC m=+105.272058460" watchObservedRunningTime="2026-02-19 00:11:15.436247463 +0000 UTC m=+105.272487452" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.440231 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.440326 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.440357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.440392 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.440421 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.543888 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.543951 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.543969 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.543990 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.544003 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.646103 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.646143 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.646155 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.646173 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.646187 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.747943 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.747994 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.748006 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.748026 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.748039 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.850224 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.850266 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.850278 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.850294 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.850325 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.952586 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.952661 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.952676 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.952702 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.952717 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:15Z","lastTransitionTime":"2026-02-19T00:11:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.990613 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:15 crc kubenswrapper[5109]: E0219 00:11:15.990761 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.990845 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:15 crc kubenswrapper[5109]: E0219 00:11:15.991014 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:15 crc kubenswrapper[5109]: I0219 00:11:15.991189 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:15 crc kubenswrapper[5109]: E0219 00:11:15.991292 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.054687 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.054759 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.054790 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.054831 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.054844 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.156958 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.157013 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.157025 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.157041 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.157051 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.259234 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.259270 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.259279 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.259292 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.259301 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.362145 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.362208 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.362226 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.362253 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.362270 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.430129 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-bjs9p" event={"ID":"42e68a30-b704-4b69-b682-602323a8ead0","Type":"ContainerStarted","Data":"580f788b7811ed7a352a1c63784744b6d2c475cd59fadc261192dcc7df477524"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.432685 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"647dc914e480bc32e92052a56cc2baa2ac5cd793430ef6a5324b94b81f42e2b4"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.434815 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="dca5979790ab8c06361563bb21d2a340a83a53194b8762ca22cb2e176a15e33d" exitCode=0 Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.434868 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"dca5979790ab8c06361563bb21d2a340a83a53194b8762ca22cb2e176a15e33d"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442261 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442329 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442358 5109 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442378 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442398 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.442413 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.464839 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.464896 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.464914 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.464938 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.464958 5109 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.482391 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-bjs9p" podStartSLOduration=84.482365592 podStartE2EDuration="1m24.482365592s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:16.450828222 +0000 UTC m=+106.287068241" watchObservedRunningTime="2026-02-19 00:11:16.482365592 +0000 UTC m=+106.318605621" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.567339 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.567405 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.567423 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.567452 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.567470 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.669140 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.669189 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.669201 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.669219 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.669231 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.771623 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.771734 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.771760 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.771792 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.771813 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.874342 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.874403 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.874421 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.874445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.874470 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.977265 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.977345 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.977369 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.977398 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.977420 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:16Z","lastTransitionTime":"2026-02-19T00:11:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:16 crc kubenswrapper[5109]: I0219 00:11:16.990964 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:16 crc kubenswrapper[5109]: E0219 00:11:16.991145 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.079241 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.079503 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.079516 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.079533 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.079545 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.181491 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.181541 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.181554 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.181573 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.181583 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.284067 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.284110 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.284120 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.284133 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.284142 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.386130 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.386184 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.386197 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.386214 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.386227 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.448963 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-cltq5" event={"ID":"ea82223b-3009-45c2-bf16-6037e4f81188","Type":"ContainerStarted","Data":"1c4d09ef65301537a0751ad0d4df28d74833bf728d1c4b874d29549dbf34880c"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.452008 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"a9ab8824cf4bd581dd84c3ef125ba58652d8cd0614b1c7d9c10de4a3bc47b732"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.452084 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.455675 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="6fc6f5296ef8dd3519033cf276eb94b2b7cb18ac72806061ea13b8bc5c3a3c74" exitCode=0 Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.455769 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"6fc6f5296ef8dd3519033cf276eb94b2b7cb18ac72806061ea13b8bc5c3a3c74"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.470973 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-cltq5" podStartSLOduration=85.4709495 podStartE2EDuration="1m25.4709495s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.47024902 +0000 UTC m=+107.306489009" watchObservedRunningTime="2026-02-19 00:11:17.4709495 +0000 UTC m=+107.307189499" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.496956 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.497014 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.497034 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.497053 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.497066 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.497945 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podStartSLOduration=85.497913028 podStartE2EDuration="1m25.497913028s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:17.497446575 +0000 UTC m=+107.333686574" watchObservedRunningTime="2026-02-19 00:11:17.497913028 +0000 UTC m=+107.334153057" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.600833 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.600889 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.600906 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.600927 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.600941 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.707346 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.707392 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.707405 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.707423 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.707436 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.809230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.809282 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.809298 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.809316 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.809328 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.911763 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.911825 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.911846 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.911873 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.911891 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:17Z","lastTransitionTime":"2026-02-19T00:11:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.990941 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:17 crc kubenswrapper[5109]: E0219 00:11:17.991066 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.991140 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:17 crc kubenswrapper[5109]: E0219 00:11:17.991278 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:17 crc kubenswrapper[5109]: I0219 00:11:17.990952 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:17 crc kubenswrapper[5109]: E0219 00:11:17.991404 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.013746 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.013791 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.013801 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.013815 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.013825 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.117068 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.117112 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.117123 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.117141 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.117152 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.219567 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.219766 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.219776 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.219791 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.219800 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.321567 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.321611 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.321620 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.321645 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.321655 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.423833 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.423893 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.423911 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.423937 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.423954 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.460595 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="e39f860bd60665e527dc3843581ed741b3eee73452dd34215e76800ae5b00f23" exitCode=0 Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.460664 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"e39f860bd60665e527dc3843581ed741b3eee73452dd34215e76800ae5b00f23"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.526146 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.526474 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.526486 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.526504 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.526514 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.629016 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.629063 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.629112 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.629129 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.629140 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.730893 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.730935 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.730944 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.730959 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.730971 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.833106 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.833169 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.833188 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.833221 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.833238 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.935390 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.935427 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.935435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.935450 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.935459 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:18Z","lastTransitionTime":"2026-02-19T00:11:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 19 00:11:18 crc kubenswrapper[5109]: I0219 00:11:18.997801 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:18 crc kubenswrapper[5109]: E0219 00:11:18.997962 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.038842 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.039061 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.039080 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.039100 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.039113 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.140601 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.140658 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.140677 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.140693 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.140703 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.243357 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.243417 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.243429 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.243445 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.243459 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.346353 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.346412 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.346425 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.346448 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.346465 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.449355 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.449433 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.449451 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.449475 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.449517 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.466545 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"afe6dc0dc3cdf3ac1cff102b293820a7387b215d8bd60be0f6c4ca7303763fd6"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.469204 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ctz69" event={"ID":"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7","Type":"ContainerStarted","Data":"c36d18549c89f325a547d5d1938e591a3549ad096def50af8829a9adee3ac740"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.472850 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerStarted","Data":"e991b59bbcbdcd24fd07e086a9425466c9533b69f75ff9ed8a9746b147d68ffc"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.478266 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.533950 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-ctz69" podStartSLOduration=87.533914422 podStartE2EDuration="1m27.533914422s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:19.532997366 +0000 UTC m=+109.369237365" watchObservedRunningTime="2026-02-19 00:11:19.533914422 +0000 UTC m=+109.370154461"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.551859 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.551949 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.551979 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.552011 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.552038 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.655108 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.655203 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.655224 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.655256 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.655279 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.758230 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.758306 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.758327 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.758354 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.758375 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.855594 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.855778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.855900 5109 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.855928 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.855867793 +0000 UTC m=+141.692107812 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.855995 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.855969566 +0000 UTC m=+141.692209565 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.856042 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.856204 5109 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.856253 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.856244174 +0000 UTC m=+141.692484173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.861270 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.861317 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.861329 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.861350 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.861365 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.956803 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.956863 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.956892 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957079 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957102 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957115 5109 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957123 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957170 5109 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957179 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.957159696 +0000 UTC m=+141.793399685 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957189 5109 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957312 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.957274639 +0000 UTC m=+141.793514668 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957321 5109 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.957430 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs podName:4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc nodeName:}" failed. No retries permitted until 2026-02-19 00:11:51.957401063 +0000 UTC m=+141.793641092 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs") pod "network-metrics-daemon-scmsj" (UID: "4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc") : object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.964299 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.964363 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.964402 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.964433 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.964456 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:19Z","lastTransitionTime":"2026-02-19T00:11:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.990939 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.990971 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj"
Feb 19 00:11:19 crc kubenswrapper[5109]: I0219 00:11:19.991002 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.991163 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.991344 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc"
Feb 19 00:11:19 crc kubenswrapper[5109]: E0219 00:11:19.991554 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.067240 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.067558 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.067578 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.067607 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.067667 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.170092 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.170157 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.170168 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.170189 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.170201 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.273300 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.273381 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.273404 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.273431 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.273451 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.376076 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.376161 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.376182 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.376209 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.376226 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.479030 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.479102 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.479120 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.479142 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.479165 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.492805 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"e991b59bbcbdcd24fd07e086a9425466c9533b69f75ff9ed8a9746b147d68ffc"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.493924 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="e991b59bbcbdcd24fd07e086a9425466c9533b69f75ff9ed8a9746b147d68ffc" exitCode=0
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.582004 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.582063 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.582082 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.582106 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.582128 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.684381 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.684435 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.684456 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.684480 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.684498 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.786281 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.786332 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.786350 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.786377 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.786395 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.889546 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.889621 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.889678 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.889709 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.889727 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992514 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992553 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992564 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992581 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992595 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:20Z","lastTransitionTime":"2026-02-19T00:11:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:20 crc kubenswrapper[5109]: I0219 00:11:20.992657 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 19 00:11:20 crc kubenswrapper[5109]: E0219 00:11:20.992805 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.095393 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.095626 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.096056 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.097214 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.097758 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:21Z","lastTransitionTime":"2026-02-19T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.149478 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.149536 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.149549 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.149578 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.149592 5109 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-19T00:11:21Z","lastTransitionTime":"2026-02-19T00:11:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.201288 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"]
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.413663 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.416509 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.417349 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.418814 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.418833 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.475159 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.475238 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.475274 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7759e17d-5f34-4fcd-b838-4b40730e45d5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.475547 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7759e17d-5f34-4fcd-b838-4b40730e45d5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.475683 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7759e17d-5f34-4fcd-b838-4b40730e45d5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n"
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.503192 5109 generic.go:358] "Generic (PLEG): container finished" podID="45b69efd-a181-4847-9934-8ea00d53e9fd" containerID="66f3220014b9620643acebd6557c0a2a5567ffe324de16d48ac2a6ea8e06e71b" exitCode=0
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.503267 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerDied","Data":"66f3220014b9620643acebd6557c0a2a5567ffe324de16d48ac2a6ea8e06e71b"}
Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.511068 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9"
event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerStarted","Data":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.517041 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.527703 5109 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.540004 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.556660 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.576909 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577063 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577080 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577148 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7759e17d-5f34-4fcd-b838-4b40730e45d5-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577194 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7759e17d-5f34-4fcd-b838-4b40730e45d5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577314 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7759e17d-5f34-4fcd-b838-4b40730e45d5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.577382 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7759e17d-5f34-4fcd-b838-4b40730e45d5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 
00:11:21.579064 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podStartSLOduration=89.57904714 podStartE2EDuration="1m29.57904714s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:21.568469645 +0000 UTC m=+111.404709674" watchObservedRunningTime="2026-02-19 00:11:21.57904714 +0000 UTC m=+111.415287149" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.581949 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7759e17d-5f34-4fcd-b838-4b40730e45d5-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.592903 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7759e17d-5f34-4fcd-b838-4b40730e45d5-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.612471 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7759e17d-5f34-4fcd-b838-4b40730e45d5-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-xw79n\" (UID: \"7759e17d-5f34-4fcd-b838-4b40730e45d5\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.732380 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.990375 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.990429 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:21 crc kubenswrapper[5109]: I0219 00:11:21.990380 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:21 crc kubenswrapper[5109]: E0219 00:11:21.990548 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:21 crc kubenswrapper[5109]: E0219 00:11:21.990722 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:21 crc kubenswrapper[5109]: E0219 00:11:21.990867 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.522261 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-htkb9" event={"ID":"45b69efd-a181-4847-9934-8ea00d53e9fd","Type":"ContainerStarted","Data":"088a0e5a650bfb111c91f5388743ca0772616d6bd1c6e8b4e32142aa628f8bd0"} Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.526713 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" event={"ID":"7759e17d-5f34-4fcd-b838-4b40730e45d5","Type":"ContainerStarted","Data":"e961365905d996803198c12776e44c647f7d13f025fecd7d1274a5e6a2f91630"} Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.526764 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" event={"ID":"7759e17d-5f34-4fcd-b838-4b40730e45d5","Type":"ContainerStarted","Data":"8c9279c7ff9f4ebb0f399ff02d535306b2f27b1d98be6a313d7ef8c5db5142f2"} Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.526804 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.527001 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.552490 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-htkb9" podStartSLOduration=90.55246266 podStartE2EDuration="1m30.55246266s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 
00:11:22.551925624 +0000 UTC m=+112.388165633" watchObservedRunningTime="2026-02-19 00:11:22.55246266 +0000 UTC m=+112.388702649" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.566118 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.584672 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-xw79n" podStartSLOduration=90.584603787 podStartE2EDuration="1m30.584603787s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:22.575382481 +0000 UTC m=+112.411622490" watchObservedRunningTime="2026-02-19 00:11:22.584603787 +0000 UTC m=+112.420843776" Feb 19 00:11:22 crc kubenswrapper[5109]: I0219 00:11:22.990775 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:22 crc kubenswrapper[5109]: E0219 00:11:22.990967 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:23 crc kubenswrapper[5109]: I0219 00:11:23.626322 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-scmsj"] Feb 19 00:11:23 crc kubenswrapper[5109]: I0219 00:11:23.626475 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:23 crc kubenswrapper[5109]: E0219 00:11:23.626564 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:23 crc kubenswrapper[5109]: I0219 00:11:23.990383 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:23 crc kubenswrapper[5109]: I0219 00:11:23.990421 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:23 crc kubenswrapper[5109]: E0219 00:11:23.990509 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:23 crc kubenswrapper[5109]: E0219 00:11:23.990675 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:24 crc kubenswrapper[5109]: I0219 00:11:24.990853 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:24 crc kubenswrapper[5109]: E0219 00:11:24.990974 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:24 crc kubenswrapper[5109]: I0219 00:11:24.990866 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:24 crc kubenswrapper[5109]: I0219 00:11:24.991561 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99" Feb 19 00:11:24 crc kubenswrapper[5109]: E0219 00:11:24.991567 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.551773 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.555727 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae"} Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.556258 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.589609 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=38.589577344 podStartE2EDuration="38.589577344s" podCreationTimestamp="2026-02-19 00:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:25.589172543 +0000 UTC m=+115.425412572" watchObservedRunningTime="2026-02-19 00:11:25.589577344 +0000 UTC m=+115.425817373" Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.990545 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:25 crc kubenswrapper[5109]: E0219 00:11:25.990718 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:25 crc kubenswrapper[5109]: I0219 00:11:25.990547 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:25 crc kubenswrapper[5109]: E0219 00:11:25.990969 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:26 crc kubenswrapper[5109]: I0219 00:11:26.990545 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:26 crc kubenswrapper[5109]: E0219 00:11:26.990835 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 19 00:11:26 crc kubenswrapper[5109]: I0219 00:11:26.991164 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:26 crc kubenswrapper[5109]: E0219 00:11:26.992081 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-scmsj" podUID="4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc" Feb 19 00:11:27 crc kubenswrapper[5109]: I0219 00:11:27.990380 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:27 crc kubenswrapper[5109]: E0219 00:11:27.990551 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 19 00:11:27 crc kubenswrapper[5109]: I0219 00:11:27.990585 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:27 crc kubenswrapper[5109]: E0219 00:11:27.990872 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.485270 5109 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.485556 5109 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.525439 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.532401 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vqhpb"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.532565 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.535410 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-tgx9p"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.535694 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.539073 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.539438 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.542155 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.543360 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.543533 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.543933 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.543964 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.544184 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.547247 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29524320-lgkhz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.547582 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.563400 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.565379 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.565587 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.575050 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.575596 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.575815 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.575888 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.576829 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577065 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577253 5109 reflector.go:430] "Caches populated" 
type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577400 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577436 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577559 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.577865 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578600 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578747 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578786 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578841 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578876 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.578912 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579031 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579064 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579094 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579216 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579234 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579280 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579352 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579422 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579445 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579571 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.579774 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.580043 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.580207 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-lgkhz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.580850 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.580067 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.584287 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.587283 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.589305 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-v8z7c"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.590416 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.592377 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.592907 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.596166 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.596209 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.596742 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-j969t"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.597050 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.597285 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-v8z7c"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.597997 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.601216 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.601737 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.602760 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.603346 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.603583 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.604108 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wtftk"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.604434 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605192 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605395 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605509 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605625 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605722 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605798 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.605907 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606433 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606573 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606621 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606718 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606735 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606860 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.606980 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.607687 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.607799 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.608337 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.610519 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.631517 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-4d9db"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.649352 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.649669 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.650682 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.650934 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.651591 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.651768 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.651893 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.651996 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652039 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652115 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652217 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652245 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652306 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652365 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652398 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652441 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652478 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652498 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652584 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652651 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652711 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652592 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.652366 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.654302 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.657398 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.658667 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.659262 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.659342 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-4d9db"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.660914 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.661574 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.661912 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.662915 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.666912 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667026 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667092 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667158 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667326 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667390 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667613 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.667735 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-rgj5z"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668173 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668264 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668455 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668654 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668861 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.668962 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.670028 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.671356 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.673774 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-slgm9"]
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675558 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-audit-dir\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675597 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xshs\" (UniqueName: \"kubernetes.io/projected/c65a4832-f511-4d14-8d80-25a2129b8e3a-kube-api-access-8xshs\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675648 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675695 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675713 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d3ff4f-4af6-4aec-a501-3e4995505046-available-featuregates\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675740 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675763 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6ks\" (UniqueName: \"kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675805 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd26dc84-70f4-4c4c-b03b-556651eba161-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675831 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675857 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5569fbd-3280-45ba-9b63-276c4a7a2b68-serving-cert\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675915 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c65a4832-f511-4d14-8d80-25a2129b8e3a-tmp-dir\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675942 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.675968 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676000 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-node-pullsecrets\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676024 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q9k6\" (UniqueName: \"kubernetes.io/projected/070d6fda-192f-47cb-b873-192e072ff078-kube-api-access-6q9k6\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676075 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676071 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676105 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676130 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676151 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmbb7\" (UniqueName: \"kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676174 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676211 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d97n\" (UniqueName: \"kubernetes.io/projected/45675682-2073-4412-90c7-940bf3274c7c-kube-api-access-7d97n\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676232 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676249 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-dir\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676281 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm9z8\" (UniqueName: \"kubernetes.io/projected/c5569fbd-3280-45ba-9b63-276c4a7a2b68-kube-api-access-vm9z8\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676310 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676332 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhvs9\" (UniqueName: \"kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676355 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m642n\" (UniqueName: \"kubernetes.io/projected/c4130b11-7b60-4ee2-a12b-b498e2944738-kube-api-access-m642n\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676378 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-config\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676403 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-auth-proxy-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676421 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-config\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676444 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676486 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c65a4832-f511-4d14-8d80-25a2129b8e3a-metrics-tls\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676586 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dm8tn\" (UniqueName: \"kubernetes.io/projected/2034b852-cb28-4233-a522-58ff1fb7945c-kube-api-access-dm8tn\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676617 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/070d6fda-192f-47cb-b873-192e072ff078-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676652 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2034b852-cb28-4233-a522-58ff1fb7945c-serving-cert\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676700 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb659\" (UniqueName: \"kubernetes.io/projected/e5d3ff4f-4af6-4aec-a501-3e4995505046-kube-api-access-cb659\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676721 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-policies\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676743 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-serving-cert\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676764 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-trusted-ca-bundle\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676785 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v679z\" (UniqueName: \"kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676817 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID:
\"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676844 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-serving-cert\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676869 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676896 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676920 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-audit\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676940 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676959 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afe49bd-6a2b-4685-802a-258fb115d254-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.676984 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.677012 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.678219 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-encryption-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.679107 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-image-import-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.679541 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-encryption-config\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686664 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pq25\" (UniqueName: \"kubernetes.io/projected/fd26dc84-70f4-4c4c-b03b-556651eba161-kube-api-access-7pq25\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686732 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d3ff4f-4af6-4aec-a501-3e4995505046-serving-cert\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686770 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686798 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-config\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686825 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686844 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0afe49bd-6a2b-4685-802a-258fb115d254-config\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686847 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686869 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686893 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686914 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686940 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686956 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn675\" (UniqueName: \"kubernetes.io/projected/0afe49bd-6a2b-4685-802a-258fb115d254-kube-api-access-cn675\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.686980 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwlkh\" (UniqueName: \"kubernetes.io/projected/6fe136ed-c904-47d5-8df2-13350ff341d9-kube-api-access-jwlkh\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687023 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-trusted-ca\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687058 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-client\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687078 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-serving-ca\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687172 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-images\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687259 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687483 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687537 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-client\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687564 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45675682-2073-4412-90c7-940bf3274c7c-machine-approver-tls\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " 
pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687976 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.687993 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-rgj5z" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.692956 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.693005 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-58zqj"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.694278 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.696235 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.701887 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.704284 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.704836 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.706054 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.706207 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.714320 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.714536 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.715202 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.717886 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.718785 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.720443 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.720589 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.727798 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.728316 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.733169 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.737190 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.737478 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.738121 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.739840 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.739906 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.742624 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p2dmz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.742965 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.743103 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.746513 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.747104 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.752089 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-zhjpv"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.754589 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.758823 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.762866 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.763296 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.773835 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.775546 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.775896 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.779795 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.779909 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.782334 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.782571 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.785722 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.785968 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.787959 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.787983 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788222 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788258 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788276 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788293 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lmbb7\" (UniqueName: 
\"kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788309 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788325 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7d97n\" (UniqueName: \"kubernetes.io/projected/45675682-2073-4412-90c7-940bf3274c7c-kube-api-access-7d97n\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788342 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788356 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-dir\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788374 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788390 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg6md\" (UniqueName: \"kubernetes.io/projected/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-kube-api-access-zg6md\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788409 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vm9z8\" (UniqueName: \"kubernetes.io/projected/c5569fbd-3280-45ba-9b63-276c4a7a2b68-kube-api-access-vm9z8\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788425 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788450 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29790027-9f37-464a-aa38-74b8232996e9-serving-cert\") pod 
\"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788468 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnk4p\" (UniqueName: \"kubernetes.io/projected/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-kube-api-access-pnk4p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788532 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qhvs9\" (UniqueName: \"kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.788750 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789101 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-dir\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789219 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789256 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789297 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m642n\" (UniqueName: \"kubernetes.io/projected/c4130b11-7b60-4ee2-a12b-b498e2944738-kube-api-access-m642n\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789331 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-config\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789404 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-auth-proxy-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789441 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-config\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-srv-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.789492 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: 
I0219 00:11:28.790145 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790206 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-auth-proxy-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790248 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790278 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790297 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c65a4832-f511-4d14-8d80-25a2129b8e3a-metrics-tls\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 
00:11:28.790335 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790383 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-config\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790387 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dm8tn\" (UniqueName: \"kubernetes.io/projected/2034b852-cb28-4233-a522-58ff1fb7945c-kube-api-access-dm8tn\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790448 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790468 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-serving-cert\") pod 
\"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790548 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlz7\" (UniqueName: \"kubernetes.io/projected/d90a5916-ed50-483f-84e3-ec9e44da92f5-kube-api-access-mtlz7\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790577 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/070d6fda-192f-47cb-b873-192e072ff078-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790594 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2034b852-cb28-4233-a522-58ff1fb7945c-serving-cert\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790676 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790698 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790715 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-service-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790759 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cb659\" (UniqueName: \"kubernetes.io/projected/e5d3ff4f-4af6-4aec-a501-3e4995505046-kube-api-access-cb659\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790799 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-policies\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790815 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-serving-cert\") pod \"apiserver-8596bd845d-5hvvj\" (UID: 
\"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790831 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-trusted-ca-bundle\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790847 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v679z\" (UniqueName: \"kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790872 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-images\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790934 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790952 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-serving-cert\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.790989 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791009 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791027 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-audit\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 
00:11:28.791042 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791061 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afe49bd-6a2b-4685-802a-258fb115d254-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791078 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791100 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caabdbf4-9047-45d1-a1ae-84fee87393c9-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791226 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume\") pod 
\"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791251 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791290 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791310 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nb6g\" (UniqueName: \"kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791328 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 
00:11:28.791346 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd0078b7-6236-4b58-a64f-bcb5753c7a89-webhook-certs\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: \"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791373 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-encryption-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791393 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791411 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791428 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29790027-9f37-464a-aa38-74b8232996e9-kube-api-access\") pod 
\"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791444 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-default-certificate\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.791460 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-metrics-certs\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.792691 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793005 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-config\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793017 5109 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793100 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793506 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-audit\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793545 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-audit-policies\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.793621 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.794181 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.794564 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-trusted-ca-bundle\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.794676 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796102 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-image-import-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796342 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-encryption-config\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796456 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sdrf\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-kube-api-access-6sdrf\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796488 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxv8\" (UniqueName: \"kubernetes.io/projected/753c6b93-7309-452f-b10c-8aa1c730a48a-kube-api-access-7dxv8\") pod \"downloads-747b44746d-rgj5z\" (UID: \"753c6b93-7309-452f-b10c-8aa1c730a48a\") " pod="openshift-console/downloads-747b44746d-rgj5z" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796567 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796625 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29790027-9f37-464a-aa38-74b8232996e9-config\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796666 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/29790027-9f37-464a-aa38-74b8232996e9-tmp-dir\") pod 
\"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796891 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7pq25\" (UniqueName: \"kubernetes.io/projected/fd26dc84-70f4-4c4c-b03b-556651eba161-kube-api-access-7pq25\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796925 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d3ff4f-4af6-4aec-a501-3e4995505046-serving-cert\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796947 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796950 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/45675682-2073-4412-90c7-940bf3274c7c-config\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.796979 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797129 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/070d6fda-192f-47cb-b873-192e072ff078-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797134 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797230 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-config\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797293 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: 
\"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797391 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0afe49bd-6a2b-4685-802a-258fb115d254-config\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797453 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-config\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797379 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wtftk"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797540 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tt7nq"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797568 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-image-import-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797616 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797766 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-config\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.797934 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2034b852-cb28-4233-a522-58ff1fb7945c-serving-cert\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798001 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798032 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798041 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-serving-cert\") pod 
\"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798053 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798338 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0afe49bd-6a2b-4685-802a-258fb115d254-config\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798409 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798762 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.798998 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cn675\" (UniqueName: 
\"kubernetes.io/projected/0afe49bd-6a2b-4685-802a-258fb115d254-kube-api-access-cn675\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799098 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwlkh\" (UniqueName: \"kubernetes.io/projected/6fe136ed-c904-47d5-8df2-13350ff341d9-kube-api-access-jwlkh\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799143 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799302 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-trusted-ca\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799334 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90a5916-ed50-483f-84e3-ec9e44da92f5-service-ca-bundle\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799146 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799385 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-client\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799402 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-serving-ca\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799429 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-images\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799446 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-stats-auth\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799489 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799530 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/baf9561a-4502-4e7e-b9af-acb69d721496-tmpfs\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799550 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c65a4832-f511-4d14-8d80-25a2129b8e3a-metrics-tls\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799564 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsxq2\" (UniqueName: \"kubernetes.io/projected/bd0078b7-6236-4b58-a64f-bcb5753c7a89-kube-api-access-tsxq2\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: 
\"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799552 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799686 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799801 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-client\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799901 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45675682-2073-4412-90c7-940bf3274c7c-machine-approver-tls\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799942 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-audit-dir\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.799986 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8xshs\" (UniqueName: \"kubernetes.io/projected/c65a4832-f511-4d14-8d80-25a2129b8e3a-kube-api-access-8xshs\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800018 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800044 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800051 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caabdbf4-9047-45d1-a1ae-84fee87393c9-config\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800079 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800104 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d3ff4f-4af6-4aec-a501-3e4995505046-available-featuregates\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800132 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800161 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6ks\" (UniqueName: \"kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800189 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a07a6721-367c-4f7a-b6a6-0266df632216-tmp-dir\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc 
kubenswrapper[5109]: I0219 00:11:28.800352 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnhlp\" (UniqueName: \"kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp\") pod \"migrator-866fcbc849-hfxtc\" (UID: \"8bf22cea-38f6-463c-97e7-b2a7feec536c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800406 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800417 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd26dc84-70f4-4c4c-b03b-556651eba161-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800440 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800532 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800543 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-audit-dir\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.800868 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801174 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/070d6fda-192f-47cb-b873-192e072ff078-images\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801270 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d3ff4f-4af6-4aec-a501-3e4995505046-available-featuregates\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801307 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-serving-ca\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801381 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801542 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5569fbd-3280-45ba-9b63-276c4a7a2b68-serving-cert\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801754 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-client\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801800 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf27f\" (UniqueName: \"kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801897 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801918 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.801993 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c65a4832-f511-4d14-8d80-25a2129b8e3a-tmp-dir\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802033 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802134 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802178 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802219 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5569fbd-3280-45ba-9b63-276c4a7a2b68-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802225 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caabdbf4-9047-45d1-a1ae-84fee87393c9-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802264 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802295 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-node-pullsecrets\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802323 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6q9k6\" (UniqueName: \"kubernetes.io/projected/070d6fda-192f-47cb-b873-192e072ff078-kube-api-access-6q9k6\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802369 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbt6n\" (UniqueName: \"kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802491 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c65a4832-f511-4d14-8d80-25a2129b8e3a-tmp-dir\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802577 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c4130b11-7b60-4ee2-a12b-b498e2944738-node-pullsecrets\") pod 
\"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.802587 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.803326 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2034b852-cb28-4233-a522-58ff1fb7945c-trusted-ca\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.803373 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-encryption-config\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.803552 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d3ff4f-4af6-4aec-a501-3e4995505046-serving-cert\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.803837 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-encryption-config\") 
pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804141 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-lgkhz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804170 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804181 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vqhpb"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804182 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0afe49bd-6a2b-4685-802a-258fb115d254-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804190 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804223 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-serving-cert\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804257 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-rgj5z"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804278 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804288 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804295 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804309 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804258 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804322 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804384 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804416 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804429 
5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804439 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-v8z7c"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804448 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804457 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-tgx9p"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804473 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804492 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804503 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804512 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804511 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc 
kubenswrapper[5109]: I0219 00:11:28.804522 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804535 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-trt7v"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.804538 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.805194 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6fe136ed-c904-47d5-8df2-13350ff341d9-etcd-client\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.805886 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c4130b11-7b60-4ee2-a12b-b498e2944738-etcd-client\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.805962 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: 
I0219 00:11:28.806648 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.806855 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5569fbd-3280-45ba-9b63-276c4a7a2b68-serving-cert\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.807290 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fd26dc84-70f4-4c4c-b03b-556651eba161-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.809083 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/45675682-2073-4412-90c7-940bf3274c7c-machine-approver-tls\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.810409 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817019 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p2dmz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817052 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817064 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817088 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817100 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817112 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817123 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817134 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-l48tx"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.817173 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819705 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-slgm9"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819729 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-zhjpv"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819740 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819751 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819760 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819770 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-trt7v"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819778 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819787 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-4d9db"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819799 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.819809 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-pnlfz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 
00:11:28.820170 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.822028 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whng8"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.822140 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.824545 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pnlfz"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.824562 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whng8"] Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.824600 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.835561 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.856249 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.875206 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.894872 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903206 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903245 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903267 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29790027-9f37-464a-aa38-74b8232996e9-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903288 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-default-certificate\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903303 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-metrics-certs\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc 
kubenswrapper[5109]: I0219 00:11:28.903320 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sdrf\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-kube-api-access-6sdrf\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903349 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxv8\" (UniqueName: \"kubernetes.io/projected/753c6b93-7309-452f-b10c-8aa1c730a48a-kube-api-access-7dxv8\") pod \"downloads-747b44746d-rgj5z\" (UID: \"753c6b93-7309-452f-b10c-8aa1c730a48a\") " pod="openshift-console/downloads-747b44746d-rgj5z" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903366 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29790027-9f37-464a-aa38-74b8232996e9-config\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903381 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/29790027-9f37-464a-aa38-74b8232996e9-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903402 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-serving-cert\") pod 
\"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903421 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-config\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903443 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90a5916-ed50-483f-84e3-ec9e44da92f5-service-ca-bundle\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903461 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-stats-auth\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903482 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903498 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/baf9561a-4502-4e7e-b9af-acb69d721496-tmpfs\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903515 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tsxq2\" (UniqueName: \"kubernetes.io/projected/bd0078b7-6236-4b58-a64f-bcb5753c7a89-kube-api-access-tsxq2\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: \"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903551 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caabdbf4-9047-45d1-a1ae-84fee87393c9-config\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903572 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a07a6721-367c-4f7a-b6a6-0266df632216-tmp-dir\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903590 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnhlp\" (UniqueName: \"kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp\") pod \"migrator-866fcbc849-hfxtc\" (UID: \"8bf22cea-38f6-463c-97e7-b2a7feec536c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:28 crc kubenswrapper[5109]: 
I0219 00:11:28.903626 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-client\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903659 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jf27f\" (UniqueName: \"kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903679 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903698 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903718 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caabdbf4-9047-45d1-a1ae-84fee87393c9-tmp\") pod 
\"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903735 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903755 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt6n\" (UniqueName: \"kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903782 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903801 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zg6md\" (UniqueName: \"kubernetes.io/projected/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-kube-api-access-zg6md\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903820 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29790027-9f37-464a-aa38-74b8232996e9-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903837 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pnk4p\" (UniqueName: \"kubernetes.io/projected/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-kube-api-access-pnk4p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903853 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903871 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903892 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-srv-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903914 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903929 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-serving-cert\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903944 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlz7\" (UniqueName: \"kubernetes.io/projected/d90a5916-ed50-483f-84e3-ec9e44da92f5-kube-api-access-mtlz7\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903960 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903976 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.903993 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-service-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.904586 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/29790027-9f37-464a-aa38-74b8232996e9-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.904662 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/caabdbf4-9047-45d1-a1ae-84fee87393c9-config\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.904780 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-images\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: 
\"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905130 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905278 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-tmp\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905390 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caabdbf4-9047-45d1-a1ae-84fee87393c9-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905465 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a07a6721-367c-4f7a-b6a6-0266df632216-tmp-dir\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905201 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905484 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905802 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905841 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nb6g\" (UniqueName: \"kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905863 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" 
Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905881 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd0078b7-6236-4b58-a64f-bcb5753c7a89-webhook-certs\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: \"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.905933 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/baf9561a-4502-4e7e-b9af-acb69d721496-tmpfs\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.906066 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.906451 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.906676 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-auth-proxy-config\") pod 
\"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.906810 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/caabdbf4-9047-45d1-a1ae-84fee87393c9-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.909524 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/caabdbf4-9047-45d1-a1ae-84fee87393c9-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.910735 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.915731 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.935517 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.939656 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-client\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.955600 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.975202 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.980591 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a07a6721-367c-4f7a-b6a6-0266df632216-serving-cert\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.990519 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.990568 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.994832 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 19 00:11:28 crc kubenswrapper[5109]: I0219 00:11:28.995839 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-config\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.015952 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.035294 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.036996 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.055457 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.065113 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a07a6721-367c-4f7a-b6a6-0266df632216-etcd-service-ca\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.075812 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.084460 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29790027-9f37-464a-aa38-74b8232996e9-config\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.094934 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.109547 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29790027-9f37-464a-aa38-74b8232996e9-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.115605 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.134574 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.154690 5109 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.169292 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-default-certificate\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.175143 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.194910 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.215271 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.230070 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-metrics-certs\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.235240 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.247204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d90a5916-ed50-483f-84e3-ec9e44da92f5-service-ca-bundle\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " 
pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.255938 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.260829 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d90a5916-ed50-483f-84e3-ec9e44da92f5-stats-auth\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.275841 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.295096 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.314617 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.334900 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.339551 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.341501 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.354679 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.365950 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.375565 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.395482 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.415027 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.434880 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.455367 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.485308 5109 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.495692 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.516539 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.535651 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.555075 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.575997 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.582587 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.595686 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.611749 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.615473 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.635661 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.655156 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.657153 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-config\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.676055 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.682092 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/baf9561a-4502-4e7e-b9af-acb69d721496-srv-cert\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: 
\"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.695879 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.715956 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.735668 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.738006 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-images\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.753248 5109 request.go:752] "Waited before sending request" delay="1.009437559s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.755584 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.762255 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.775795 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.796267 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.801349 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bd0078b7-6236-4b58-a64f-bcb5753c7a89-webhook-certs\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: \"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.816512 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.856171 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.875910 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.895155 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.915209 5109 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.936063 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.955235 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.976001 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.990458 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.990560 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:29 crc kubenswrapper[5109]: I0219 00:11:29.995656 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.015402 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.036066 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.065165 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.075920 5109 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.095216 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.115437 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.134845 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.155748 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.176086 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.196035 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.215079 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.255837 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.276052 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.296356 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.301450 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmbb7\" (UniqueName: \"kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7\") pod \"image-pruner-29524320-lgkhz\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.317749 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.334575 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.373403 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhvs9\" (UniqueName: \"kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9\") pod \"controller-manager-65b6cccf98-mxvtz\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.393919 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm9z8\" (UniqueName: \"kubernetes.io/projected/c5569fbd-3280-45ba-9b63-276c4a7a2b68-kube-api-access-vm9z8\") pod \"authentication-operator-7f5c659b84-nqqjk\" (UID: \"c5569fbd-3280-45ba-9b63-276c4a7a2b68\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:30 
crc kubenswrapper[5109]: I0219 00:11:30.411417 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.415503 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d97n\" (UniqueName: \"kubernetes.io/projected/45675682-2073-4412-90c7-940bf3274c7c-kube-api-access-7d97n\") pod \"machine-approver-54c688565-j969t\" (UID: \"45675682-2073-4412-90c7-940bf3274c7c\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.440113 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m642n\" (UniqueName: \"kubernetes.io/projected/c4130b11-7b60-4ee2-a12b-b498e2944738-kube-api-access-m642n\") pod \"apiserver-9ddfb9f55-tgx9p\" (UID: \"c4130b11-7b60-4ee2-a12b-b498e2944738\") " pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.453044 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dm8tn\" (UniqueName: \"kubernetes.io/projected/2034b852-cb28-4233-a522-58ff1fb7945c-kube-api-access-dm8tn\") pod \"console-operator-67c89758df-v8z7c\" (UID: \"2034b852-cb28-4233-a522-58ff1fb7945c\") " pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.459465 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.467822 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.472501 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb659\" (UniqueName: \"kubernetes.io/projected/e5d3ff4f-4af6-4aec-a501-3e4995505046-kube-api-access-cb659\") pod \"openshift-config-operator-5777786469-wtftk\" (UID: \"e5d3ff4f-4af6-4aec-a501-3e4995505046\") " pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.488753 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v679z\" (UniqueName: \"kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z\") pod \"route-controller-manager-776cdc94d6-56tjh\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.494657 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.511424 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.515780 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.518361 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pq25\" (UniqueName: \"kubernetes.io/projected/fd26dc84-70f4-4c4c-b03b-556651eba161-kube-api-access-7pq25\") pod \"cluster-samples-operator-6b564684c8-7lfng\" (UID: \"fd26dc84-70f4-4c4c-b03b-556651eba161\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.519795 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.536284 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.556155 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.575427 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.596393 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" event={"ID":"45675682-2073-4412-90c7-940bf3274c7c","Type":"ContainerStarted","Data":"b3406ffd4777ea32afc73ad5d393918640507556d6eee898f746a0b2fca80b4d"} Feb 19 00:11:30 crc 
kubenswrapper[5109]: I0219 00:11:30.615766 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn675\" (UniqueName: \"kubernetes.io/projected/0afe49bd-6a2b-4685-802a-258fb115d254-kube-api-access-cn675\") pod \"openshift-apiserver-operator-846cbfc458-4tgzn\" (UID: \"0afe49bd-6a2b-4685-802a-258fb115d254\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.634397 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwlkh\" (UniqueName: \"kubernetes.io/projected/6fe136ed-c904-47d5-8df2-13350ff341d9-kube-api-access-jwlkh\") pod \"apiserver-8596bd845d-5hvvj\" (UID: \"6fe136ed-c904-47d5-8df2-13350ff341d9\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.652404 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xshs\" (UniqueName: \"kubernetes.io/projected/c65a4832-f511-4d14-8d80-25a2129b8e3a-kube-api-access-8xshs\") pod \"dns-operator-799b87ffcd-j8qfk\" (UID: \"c65a4832-f511-4d14-8d80-25a2129b8e3a\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.671242 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6ks\" (UniqueName: \"kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks\") pod \"oauth-openshift-66458b6674-nsncq\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") " pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.677166 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.695884 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.697080 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q9k6\" (UniqueName: \"kubernetes.io/projected/070d6fda-192f-47cb-b873-192e072ff078-kube-api-access-6q9k6\") pod \"machine-api-operator-755bb95488-vqhpb\" (UID: \"070d6fda-192f-47cb-b873-192e072ff078\") " pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.703267 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.706736 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29524320-lgkhz"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.710401 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.711597 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.718025 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-v8z7c"] Feb 19 00:11:30 crc kubenswrapper[5109]: W0219 00:11:30.719528 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46cb4d4a_e24c_4036_8369_78813ade70e6.slice/crio-80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0 WatchSource:0}: Error finding container 
80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0: Status 404 returned error can't find the container with id 80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0 Feb 19 00:11:30 crc kubenswrapper[5109]: W0219 00:11:30.730026 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2034b852_cb28_4233_a522_58ff1fb7945c.slice/crio-589d2269deec4d5769ba2d6c405e67e0d56ea3f53ed8198b622d30b30a02dbcd WatchSource:0}: Error finding container 589d2269deec4d5769ba2d6c405e67e0d56ea3f53ed8198b622d30b30a02dbcd: Status 404 returned error can't find the container with id 589d2269deec4d5769ba2d6c405e67e0d56ea3f53ed8198b622d30b30a02dbcd Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.734232 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.749147 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-wtftk"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.749608 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.754768 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.774449 5109 request.go:752] "Waited before sending request" delay="1.955816459s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-kpvmz&limit=500&resourceVersion=0" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.776132 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.785138 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.794952 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.808727 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.815715 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.828978 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.835717 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.849401 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.855076 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.875376 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.877142 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 19 00:11:30 crc kubenswrapper[5109]: W0219 00:11:30.885372 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fe136ed_c904_47d5_8df2_13350ff341d9.slice/crio-5d7e2b418b724f061bbd64dbacc756eb05d10f3d3f1f82dd41215316cc03968c WatchSource:0}: Error finding container 5d7e2b418b724f061bbd64dbacc756eb05d10f3d3f1f82dd41215316cc03968c: Status 404 returned error can't find the container with id 5d7e2b418b724f061bbd64dbacc756eb05d10f3d3f1f82dd41215316cc03968c Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.895457 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.916669 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.937998 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-tgx9p"] Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.942913 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.957333 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 19 00:11:30 crc kubenswrapper[5109]: W0219 00:11:30.975491 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4130b11_7b60_4ee2_a12b_b498e2944738.slice/crio-3b5cbb0d896b8c29d09b7d2fbe0bfe59fb57088e7de08199c803a7ea6078b92c WatchSource:0}: Error finding container 3b5cbb0d896b8c29d09b7d2fbe0bfe59fb57088e7de08199c803a7ea6078b92c: Status 404 returned error can't find the container with id 3b5cbb0d896b8c29d09b7d2fbe0bfe59fb57088e7de08199c803a7ea6078b92c Feb 19 00:11:30 crc kubenswrapper[5109]: I0219 00:11:30.975806 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.000816 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.020387 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsxq2\" (UniqueName: \"kubernetes.io/projected/bd0078b7-6236-4b58-a64f-bcb5753c7a89-kube-api-access-tsxq2\") pod \"multus-admission-controller-69db94689b-p2dmz\" (UID: \"bd0078b7-6236-4b58-a64f-bcb5753c7a89\") " 
pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.040808 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.052618 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sdrf\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-kube-api-access-6sdrf\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.064683 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.064727 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.073260 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zg6md\" (UniqueName: \"kubernetes.io/projected/50579d9d-c5d2-4f39-9a96-39cbd4ee8976-kube-api-access-zg6md\") pod \"machine-config-operator-67c9d58cbb-mtsdx\" (UID: \"50579d9d-c5d2-4f39-9a96-39cbd4ee8976\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.089482 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnk4p\" (UniqueName: \"kubernetes.io/projected/dbf7d8d7-ef76-4af8-bc7e-91149dd703cf-kube-api-access-pnk4p\") pod \"control-plane-machine-set-operator-75ffdb6fcd-qpwhk\" (UID: \"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" 
Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.090463 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-j8qfk"] Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.095291 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0afe49bd_6a2b_4685_802a_258fb115d254.slice/crio-51bcacb967dbd28885204a804f3e775efe26f8cc628e74ce8b57548aa7d4f41d WatchSource:0}: Error finding container 51bcacb967dbd28885204a804f3e775efe26f8cc628e74ce8b57548aa7d4f41d: Status 404 returned error can't find the container with id 51bcacb967dbd28885204a804f3e775efe26f8cc628e74ce8b57548aa7d4f41d Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.113907 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d3c8fb21-9805-4b45-b5f4-0e5f1fb80351-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-5464h\" (UID: \"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.276095 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.296256 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.315329 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.335500 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 19 00:11:31 crc kubenswrapper[5109]: 
I0219 00:11:31.358187 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bjjv\" (UniqueName: \"kubernetes.io/projected/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-kube-api-access-9bjjv\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358220 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss62t\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358270 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358287 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358308 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted\") 
pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358323 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358352 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-tmpfs\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.358370 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-console-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359487 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/efdaca02-411e-4c67-adec-db205b4e67cf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359551 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359624 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68r6c\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359709 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359736 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-trusted-ca-bundle\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359818 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-srv-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" 
Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359853 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359908 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359945 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-oauth-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.359986 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdaca02-411e-4c67-adec-db205b4e67cf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360021 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r8wc\" (UniqueName: 
\"kubernetes.io/projected/7169447a-e4aa-4492-99f3-0d21fe813f69-kube-api-access-7r8wc\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360047 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360157 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360190 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360253 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x42g4\" (UniqueName: \"kubernetes.io/projected/efdaca02-411e-4c67-adec-db205b4e67cf-kube-api-access-x42g4\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 
19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360359 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360387 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-service-ca\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.360476 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-oauth-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.363787 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:31.863769413 +0000 UTC m=+121.700009392 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.378834 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.395450 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.435886 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.439556 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.455884 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.461782 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.461986 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:31.961952096 +0000 UTC m=+121.798192085 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462558 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462587 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-service-ca\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462610 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfngp\" (UniqueName: \"kubernetes.io/projected/8e46cdbd-071c-446c-bee4-462001f9ef85-kube-api-access-tfngp\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462648 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") pod 
\"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462670 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462686 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vbc\" (UniqueName: \"kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462752 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-oauth-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462812 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462851 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-apiservice-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462894 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bjjv\" (UniqueName: \"kubernetes.io/projected/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-kube-api-access-9bjjv\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462938 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ss62t\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462940 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/29790027-9f37-464a-aa38-74b8232996e9-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-kvzlc\" (UID: \"29790027-9f37-464a-aa38-74b8232996e9\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.462965 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ktrg\" (UniqueName: \"kubernetes.io/projected/859e96d6-c432-4486-9efc-9e57147a0cdc-kube-api-access-8ktrg\") pod \"service-ca-74545575db-zhjpv\" (UID: 
\"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463444 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-cabundle\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463491 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67km\" (UniqueName: \"kubernetes.io/projected/d2e6c049-ef77-4bad-ab30-b499a7850c20-kube-api-access-p67km\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463550 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463605 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463798 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-7tmn8\" (UniqueName: \"kubernetes.io/projected/decc90f6-d956-4221-b02d-e2e28b9f307a-kube-api-access-7tmn8\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463862 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463881 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463914 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffa4ee7c-f211-40a7-ae2d-8996d8533102-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.463979 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-tmpfs\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 
00:11:31.464021 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwj6g\" (UniqueName: \"kubernetes.io/projected/3d7fffb6-c104-482f-8c6a-33b3dd961b62-kube-api-access-zwj6g\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464066 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-console-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464124 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464364 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/efdaca02-411e-4c67-adec-db205b4e67cf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464400 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-plugins-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464432 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464466 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464490 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464498 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464499 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-tmpfs\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464544 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffa4ee7c-f211-40a7-ae2d-8996d8533102-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464682 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtgm4\" (UniqueName: \"kubernetes.io/projected/5e797401-b4ca-4489-9d49-5c3d32bd20e6-kube-api-access-gtgm4\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464747 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-68r6c\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464779 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr6gc\" (UniqueName: \"kubernetes.io/projected/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-kube-api-access-rr6gc\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464898 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4ee7c-f211-40a7-ae2d-8996d8533102-config\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464920 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d7fffb6-c104-482f-8c6a-33b3dd961b62-tmpfs\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464950 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464970 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-trusted-ca-bundle\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.464991 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-certs\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465008 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8cg2\" (UniqueName: \"kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465035 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tcd6\" (UniqueName: \"kubernetes.io/projected/d0b307e4-b2bd-4498-be5e-38320e2b1350-kube-api-access-9tcd6\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465150 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4ee7c-f211-40a7-ae2d-8996d8533102-serving-cert\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465233 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27tzc\" (UniqueName: \"kubernetes.io/projected/691b2ad9-f837-4d45-a2bb-b99130bad14f-kube-api-access-27tzc\") pod \"ingress-canary-pnlfz\" (UID: \"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465355 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-socket-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465423 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465586 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-srv-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465619 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465669 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465889 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465929 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-oauth-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465961 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdaca02-411e-4c67-adec-db205b4e67cf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.465986 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/691b2ad9-f837-4d45-a2bb-b99130bad14f-cert\") pod \"ingress-canary-pnlfz\" (UID: \"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466018 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/d0b307e4-b2bd-4498-be5e-38320e2b1350-config-volume\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466059 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-key\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466097 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466303 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7r8wc\" (UniqueName: \"kubernetes.io/projected/7169447a-e4aa-4492-99f3-0d21fe813f69-kube-api-access-7r8wc\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466513 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466561 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e46cdbd-071c-446c-bee4-462001f9ef85-config\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466590 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0b307e4-b2bd-4498-be5e-38320e2b1350-metrics-tls\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466766 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466819 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466861 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-mountpoint-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc 
kubenswrapper[5109]: I0219 00:11:31.466888 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466931 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.466960 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0b307e4-b2bd-4498-be5e-38320e2b1350-tmp-dir\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467009 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-csi-data-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467033 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.467080 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:31.967059424 +0000 UTC m=+121.803299513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467120 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-webhook-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467296 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x42g4\" (UniqueName: \"kubernetes.io/projected/efdaca02-411e-4c67-adec-db205b4e67cf-kube-api-access-x42g4\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467355 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467397 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-registration-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.467910 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/efdaca02-411e-4c67-adec-db205b4e67cf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.469409 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-profile-collector-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.474483 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.478904 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.496245 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.512980 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d2ad403f-3bd2-4b56-8b7a-60ea6b409f91-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-2zvq6\" (UID: \"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.516061 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.534905 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.556015 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.560818 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.570274 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.570437 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-apiservice-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.570481 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.070447797 +0000 UTC m=+121.906687796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.570670 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8ktrg\" (UniqueName: \"kubernetes.io/projected/859e96d6-c432-4486-9efc-9e57147a0cdc-kube-api-access-8ktrg\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.570915 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-cabundle\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.570953 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p67km\" (UniqueName: \"kubernetes.io/projected/d2e6c049-ef77-4bad-ab30-b499a7850c20-kube-api-access-p67km\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571044 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7tmn8\" (UniqueName: 
\"kubernetes.io/projected/decc90f6-d956-4221-b02d-e2e28b9f307a-kube-api-access-7tmn8\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571324 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffa4ee7c-f211-40a7-ae2d-8996d8533102-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571404 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zwj6g\" (UniqueName: \"kubernetes.io/projected/3d7fffb6-c104-482f-8c6a-33b3dd961b62-kube-api-access-zwj6g\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571457 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571507 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-plugins-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571547 5109 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571593 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571707 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-plugins-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571711 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571723 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/ffa4ee7c-f211-40a7-ae2d-8996d8533102-tmp-dir\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571820 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffa4ee7c-f211-40a7-ae2d-8996d8533102-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571857 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gtgm4\" (UniqueName: \"kubernetes.io/projected/5e797401-b4ca-4489-9d49-5c3d32bd20e6-kube-api-access-gtgm4\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571905 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rr6gc\" (UniqueName: \"kubernetes.io/projected/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-kube-api-access-rr6gc\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571929 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4ee7c-f211-40a7-ae2d-8996d8533102-config\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571950 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d7fffb6-c104-482f-8c6a-33b3dd961b62-tmpfs\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.571984 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-certs\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572027 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p8cg2\" (UniqueName: \"kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572107 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9tcd6\" (UniqueName: \"kubernetes.io/projected/d0b307e4-b2bd-4498-be5e-38320e2b1350-kube-api-access-9tcd6\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572135 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4ee7c-f211-40a7-ae2d-8996d8533102-serving-cert\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572157 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-27tzc\" (UniqueName: \"kubernetes.io/projected/691b2ad9-f837-4d45-a2bb-b99130bad14f-kube-api-access-27tzc\") pod \"ingress-canary-pnlfz\" (UID: 
\"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572181 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572229 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-socket-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572259 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572336 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/691b2ad9-f837-4d45-a2bb-b99130bad14f-cert\") pod \"ingress-canary-pnlfz\" (UID: \"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572360 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b307e4-b2bd-4498-be5e-38320e2b1350-config-volume\") pod \"dns-default-trt7v\" (UID: 
\"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572383 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-key\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572396 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3d7fffb6-c104-482f-8c6a-33b3dd961b62-tmpfs\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572404 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572445 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e46cdbd-071c-446c-bee4-462001f9ef85-config\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572487 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0b307e4-b2bd-4498-be5e-38320e2b1350-metrics-tls\") pod 
\"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572538 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572562 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572618 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-mountpoint-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572681 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572795 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0b307e4-b2bd-4498-be5e-38320e2b1350-tmp-dir\") 
pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572824 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-csi-data-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572840 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-webhook-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572879 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.572907 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.072891678 +0000 UTC m=+121.909131747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572940 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-registration-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.572989 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tfngp\" (UniqueName: \"kubernetes.io/projected/8e46cdbd-071c-446c-bee4-462001f9ef85-kube-api-access-tfngp\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573017 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573038 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert\") pod \"service-ca-operator-5b9c976747-kwkd6\" 
(UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573061 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-24vbc\" (UniqueName: \"kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573118 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573125 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573193 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0b307e4-b2bd-4498-be5e-38320e2b1350-tmp-dir\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573266 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-socket-dir\") pod \"csi-hostpathplugin-whng8\" (UID: 
\"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573284 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-csi-data-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573391 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-mountpoint-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.573525 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/decc90f6-d956-4221-b02d-e2e28b9f307a-registration-dir\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.578491 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.580155 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/691b2ad9-f837-4d45-a2bb-b99130bad14f-cert\") pod \"ingress-canary-pnlfz\" (UID: \"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.595449 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.600585 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-vqhpb"] Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.610335 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod070d6fda_192f_47cb_b873_192e072ff078.slice/crio-1ddd2624a039d6ed03a02c199a89f6cbe9e63632d568f4609af72d641ab326c5 WatchSource:0}: Error finding container 1ddd2624a039d6ed03a02c199a89f6cbe9e63632d568f4609af72d641ab326c5: Status 404 returned error can't find the container with id 1ddd2624a039d6ed03a02c199a89f6cbe9e63632d568f4609af72d641ab326c5 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.610830 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-lgkhz" event={"ID":"46cb4d4a-e24c-4036-8369-78813ade70e6","Type":"ContainerStarted","Data":"0417d75210cbff694af95a4c921c670c929487a885abaad04f691988fabbfe10"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.610873 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-lgkhz" event={"ID":"46cb4d4a-e24c-4036-8369-78813ade70e6","Type":"ContainerStarted","Data":"80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.613273 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" event={"ID":"78decf6c-6b41-4e23-ae33-af1fc7cab261","Type":"ContainerStarted","Data":"681436cc0af4d6ac2a715c58a7929773fcb13218e288b4536ee0a2468ba28be2"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.613315 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" 
event={"ID":"78decf6c-6b41-4e23-ae33-af1fc7cab261","Type":"ContainerStarted","Data":"1f59c360eaa12d095d8a828a5d985de328535cb20baeed029758b61c2670000d"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.613467 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.614442 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.615420 5109 generic.go:358] "Generic (PLEG): container finished" podID="c4130b11-7b60-4ee2-a12b-b498e2944738" containerID="bbd178df064d8dfd4b3b04cec78bf8ff0eef0371a3a856ae124b9dde852972d9" exitCode=0 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.615485 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" event={"ID":"c4130b11-7b60-4ee2-a12b-b498e2944738","Type":"ContainerDied","Data":"bbd178df064d8dfd4b3b04cec78bf8ff0eef0371a3a856ae124b9dde852972d9"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.615506 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" event={"ID":"c4130b11-7b60-4ee2-a12b-b498e2944738","Type":"ContainerStarted","Data":"3b5cbb0d896b8c29d09b7d2fbe0bfe59fb57088e7de08199c803a7ea6078b92c"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.617222 5109 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mxvtz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.617265 5109 prober.go:120] "Probe failed" probeType="Readiness" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.617651 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" event={"ID":"0afe49bd-6a2b-4685-802a-258fb115d254","Type":"ContainerStarted","Data":"df14283158b05bc17a6966ac56dd9564d246d8f078eba27e2466c29ecfe25d39"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.617677 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" event={"ID":"0afe49bd-6a2b-4685-802a-258fb115d254","Type":"ContainerStarted","Data":"51bcacb967dbd28885204a804f3e775efe26f8cc628e74ce8b57548aa7d4f41d"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.622595 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" event={"ID":"34503362-be2b-40ee-be2f-cdf7da7baa6f","Type":"ContainerStarted","Data":"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.622662 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" event={"ID":"34503362-be2b-40ee-be2f-cdf7da7baa6f","Type":"ContainerStarted","Data":"a5dad433e423334f1740ba0b8db0c842746df176fbd48179e8b79ab1ac8cc23e"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.627985 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" 
event={"ID":"fd26dc84-70f4-4c4c-b03b-556651eba161","Type":"ContainerStarted","Data":"a9877c4c6cda877a3b614fd88fa46bb3ae42eece537569509eceaa499f029425"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.628028 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" event={"ID":"fd26dc84-70f4-4c4c-b03b-556651eba161","Type":"ContainerStarted","Data":"e2db1e9981fc5774941b28624e5a2a24dcee017eb075a64c10fc9bf4c54e9ebb"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.629425 5109 generic.go:358] "Generic (PLEG): container finished" podID="e5d3ff4f-4af6-4aec-a501-3e4995505046" containerID="eb5a9ba385d65eeb3f41cc6e51a254ec05f3c83932b9f2b43549b2c475042784" exitCode=0 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.629489 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" event={"ID":"e5d3ff4f-4af6-4aec-a501-3e4995505046","Type":"ContainerDied","Data":"eb5a9ba385d65eeb3f41cc6e51a254ec05f3c83932b9f2b43549b2c475042784"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.629506 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" event={"ID":"e5d3ff4f-4af6-4aec-a501-3e4995505046","Type":"ContainerStarted","Data":"8676c456fe9a1390d571f06ae7bd6617316a8828c1412381dcf49da3207b2571"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.631299 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" event={"ID":"ffac205b-047e-4cf8-bcc5-39a818ee5655","Type":"ContainerStarted","Data":"1eaf03126aca033072718f7ea3256d48c27efdc7dd974e0a75daddb5da63a012"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.633481 5109 generic.go:358] "Generic (PLEG): container finished" podID="6fe136ed-c904-47d5-8df2-13350ff341d9" 
containerID="5176612b2cd431869abe1f6d6dd5033b9c8540384e89b7675d4c28a676280600" exitCode=0 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.634050 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" event={"ID":"6fe136ed-c904-47d5-8df2-13350ff341d9","Type":"ContainerDied","Data":"5176612b2cd431869abe1f6d6dd5033b9c8540384e89b7675d4c28a676280600"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.634068 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" event={"ID":"6fe136ed-c904-47d5-8df2-13350ff341d9","Type":"ContainerStarted","Data":"5d7e2b418b724f061bbd64dbacc756eb05d10f3d3f1f82dd41215316cc03968c"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.634713 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.636392 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" event={"ID":"c65a4832-f511-4d14-8d80-25a2129b8e3a","Type":"ContainerStarted","Data":"e922fb7aa8ba516da09e83c37982e2b9e382ca3f846a4df475082f33627c73b8"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.638540 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" event={"ID":"45675682-2073-4412-90c7-940bf3274c7c","Type":"ContainerStarted","Data":"f9f31c4cf6119eb8b5d535e1e17b2c20cd7102358fcda25b3e80cfee1042d1ff"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.643463 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.644469 5109 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-56tjh 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.644536 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.645066 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" event={"ID":"2034b852-cb28-4233-a522-58ff1fb7945c","Type":"ContainerStarted","Data":"ba0e4f07bd9f735284429feab2b2b6da7fb12d5af98105c86b386bf129b38b94"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.645092 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" event={"ID":"2034b852-cb28-4233-a522-58ff1fb7945c","Type":"ContainerStarted","Data":"589d2269deec4d5769ba2d6c405e67e0d56ea3f53ed8198b622d30b30a02dbcd"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.646771 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" event={"ID":"c5569fbd-3280-45ba-9b63-276c4a7a2b68","Type":"ContainerStarted","Data":"f3cf268c24c6238c559bfda07269c09504441d66085de7a63a82d1eecb8dd487"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.646795 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" 
event={"ID":"c5569fbd-3280-45ba-9b63-276c4a7a2b68","Type":"ContainerStarted","Data":"89dcd67c50209608d14d0dcb4a53d009cd4e6698fb3a3708c672cc5aabe695a1"} Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.655331 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.660701 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-p2dmz"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.661618 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.672905 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd0078b7_6236_4b58_a64f_bcb5753c7a89.slice/crio-062f9baf4867b1a8cd69d839822c4ebe748d48313fa2a904617b0992be7da0ac WatchSource:0}: Error finding container 062f9baf4867b1a8cd69d839822c4ebe748d48313fa2a904617b0992be7da0ac: Status 404 returned error can't find the container with id 062f9baf4867b1a8cd69d839822c4ebe748d48313fa2a904617b0992be7da0ac Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.674370 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.674444 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:32.174427258 +0000 UTC m=+122.010667247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.674843 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.677432 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.177417064 +0000 UTC m=+122.013657053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.679474 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.683678 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.696051 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.697948 5109 patch_prober.go:28] interesting pod/console-operator-67c89758df-v8z7c container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.697990 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" podUID="2034b852-cb28-4233-a522-58ff1fb7945c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.715301 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-console\"/\"service-ca\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.723962 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-service-ca\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.729740 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.740467 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.745220 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.746036 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3c8fb21_9805_4b45_b5f4_0e5f1fb80351.slice/crio-0ea1192baa745225e2c25592570dd125d56bf922f5bb3e95a40fbacedac77654 WatchSource:0}: Error finding container 0ea1192baa745225e2c25592570dd125d56bf922f5bb3e95a40fbacedac77654: Status 404 returned error can't find the container with id 0ea1192baa745225e2c25592570dd125d56bf922f5bb3e95a40fbacedac77654 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.755759 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 
00:11:31.764561 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-oauth-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.777707 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.777931 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.277905084 +0000 UTC m=+122.114145073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.778206 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.778864 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.278854301 +0000 UTC m=+122.115094290 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.836323 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.844253 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.851203 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.854990 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.856832 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-console-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " 
pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.875929 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.879330 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.879661 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.37964234 +0000 UTC m=+122.215882319 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.879802 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.880101 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.380083663 +0000 UTC m=+122.216323642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.889198 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/efdaca02-411e-4c67-adec-db205b4e67cf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.896982 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.898753 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.909583 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.912516 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf7d8d7_ef76_4af8_bc7e_91149dd703cf.slice/crio-e74f0972d3d507b0e80f55015c16ad18f369b5692d7407f631bac2e060bcc205 WatchSource:0}: Error 
finding container e74f0972d3d507b0e80f55015c16ad18f369b5692d7407f631bac2e060bcc205: Status 404 returned error can't find the container with id e74f0972d3d507b0e80f55015c16ad18f369b5692d7407f631bac2e060bcc205 Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.931582 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-68r6c\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.935392 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.944214 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.945512 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx"] Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.963998 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.967410 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7169447a-e4aa-4492-99f3-0d21fe813f69-trusted-ca-bundle\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " 
pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:31 crc kubenswrapper[5109]: W0219 00:11:31.968824 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50579d9d_c5d2_4f39_9a96_39cbd4ee8976.slice/crio-2ddac219d307f2e9ad1b146b13856369bc1331fe8f4cf305120ffb60c195699b WatchSource:0}: Error finding container 2ddac219d307f2e9ad1b146b13856369bc1331fe8f4cf305120ffb60c195699b: Status 404 returned error can't find the container with id 2ddac219d307f2e9ad1b146b13856369bc1331fe8f4cf305120ffb60c195699b Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.977145 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.980881 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.980984 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.480963504 +0000 UTC m=+122.317203503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.981261 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:31 crc kubenswrapper[5109]: E0219 00:11:31.981589 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.481581182 +0000 UTC m=+122.317821171 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.994961 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-srv-cert\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:31 crc kubenswrapper[5109]: I0219 00:11:31.995385 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.002903 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-oauth-config\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.052127 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.055059 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.061562 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7169447a-e4aa-4492-99f3-0d21fe813f69-console-serving-cert\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.082326 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.082468 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.582415941 +0000 UTC m=+122.418655930 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.084087 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.084654 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.584646356 +0000 UTC m=+122.420886345 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.093249 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x42g4\" (UniqueName: \"kubernetes.io/projected/efdaca02-411e-4c67-adec-db205b4e67cf-kube-api-access-x42g4\") pod \"machine-config-controller-f9cdd68f7-pxg5n\" (UID: \"efdaca02-411e-4c67-adec-db205b4e67cf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.094985 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.101271 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlz7\" (UniqueName: \"kubernetes.io/projected/d90a5916-ed50-483f-84e3-ec9e44da92f5-kube-api-access-mtlz7\") pod \"router-default-68cf44c8b8-58zqj\" (UID: \"d90a5916-ed50-483f-84e3-ec9e44da92f5\") " pod="openshift-ingress/router-default-68cf44c8b8-58zqj"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.115314 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.121153 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxv8\" (UniqueName: \"kubernetes.io/projected/753c6b93-7309-452f-b10c-8aa1c730a48a-kube-api-access-7dxv8\") pod \"downloads-747b44746d-rgj5z\" (UID: \"753c6b93-7309-452f-b10c-8aa1c730a48a\") " pod="openshift-console/downloads-747b44746d-rgj5z"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.121878 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r8wc\" (UniqueName: \"kubernetes.io/projected/7169447a-e4aa-4492-99f3-0d21fe813f69-kube-api-access-7r8wc\") pod \"console-64d44f6ddf-4d9db\" (UID: \"7169447a-e4aa-4492-99f3-0d21fe813f69\") " pod="openshift-console/console-64d44f6ddf-4d9db"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.134868 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.140126 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.155250 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.166384 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-apiservice-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.167444 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7fffb6-c104-482f-8c6a-33b3dd961b62-webhook-cert\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.185473 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.185617 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.685589499 +0000 UTC m=+122.521829598 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.186233 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.186751 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.686735842 +0000 UTC m=+122.522975851 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.194822 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.202215 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-cabundle\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.252764 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tmn8\" (UniqueName: \"kubernetes.io/projected/decc90f6-d956-4221-b02d-e2e28b9f307a-kube-api-access-7tmn8\") pod \"csi-hostpathplugin-whng8\" (UID: \"decc90f6-d956-4221-b02d-e2e28b9f307a\") " pod="hostpath-provisioner/csi-hostpathplugin-whng8"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.287495 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.287653 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.787606193 +0000 UTC m=+122.623846182 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.288176 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.288570 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.78855456 +0000 UTC m=+122.624794589 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.332756 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtgm4\" (UniqueName: \"kubernetes.io/projected/5e797401-b4ca-4489-9d49-5c3d32bd20e6-kube-api-access-gtgm4\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.348362 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-whng8"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.353250 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc"]
Feb 19 00:11:32 crc kubenswrapper[5109]: W0219 00:11:32.361491 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29790027_9f37_464a_aa38_74b8232996e9.slice/crio-aaa64f2c0481c9273513217fcae8981ad00e67397e9270cd96e1fbf1d945876c WatchSource:0}: Error finding container aaa64f2c0481c9273513217fcae8981ad00e67397e9270cd96e1fbf1d945876c: Status 404 returned error can't find the container with id aaa64f2c0481c9273513217fcae8981ad00e67397e9270cd96e1fbf1d945876c
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.370388 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8cg2\" (UniqueName: \"kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.374942 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.384122 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffa4ee7c-f211-40a7-ae2d-8996d8533102-config\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.388933 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.389037 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.889019969 +0000 UTC m=+122.725259958 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.389403 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.389770 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.889757241 +0000 UTC m=+122.725997230 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.414696 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tcd6\" (UniqueName: \"kubernetes.io/projected/d0b307e4-b2bd-4498-be5e-38320e2b1350-kube-api-access-9tcd6\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.414894 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.429922 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-certs\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.443900 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.464011 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.471524 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffa4ee7c-f211-40a7-ae2d-8996d8533102-serving-cert\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.471684 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/859e96d6-c432-4486-9efc-9e57147a0cdc-signing-key\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.477492 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.487451 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.490958 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.491592 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:32.991572249 +0000 UTC m=+122.827812238 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.495624 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.503387 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0b307e4-b2bd-4498-be5e-38320e2b1350-config-volume\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.515791 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.532666 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.535210 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.546488 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e46cdbd-071c-446c-bee4-462001f9ef85-config\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.555265 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.570147 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0b307e4-b2bd-4498-be5e-38320e2b1350-metrics-tls\") pod \"dns-default-trt7v\" (UID: \"d0b307e4-b2bd-4498-be5e-38320e2b1350\") " pod="openshift-dns/dns-default-trt7v"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573606 5109 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573680 5109 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573706 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert podName:8e46cdbd-071c-446c-bee4-462001f9ef85 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073682238 +0000 UTC m=+122.909922227 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert") pod "service-ca-operator-5b9c976747-kwkd6" (UID: "8e46cdbd-071c-446c-bee4-462001f9ef85") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573727 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca podName:dd92fdf2-3d74-4fac-af8c-c7fe7b025492 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073714319 +0000 UTC m=+122.909954308 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca") pod "marketplace-operator-547dbd544d-ddddh" (UID: "dd92fdf2-3d74-4fac-af8c-c7fe7b025492") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573756 5109 configmap.go:193] Couldn't get configMap openshift-multus/cni-sysctl-allowlist: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573786 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist podName:6a76c696-18d1-491c-9d23-36e91f949eed nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073777291 +0000 UTC m=+122.910017280 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cni-sysctl-allowlist" (UniqueName: "kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist") pod "cni-sysctl-allowlist-ds-tt7nq" (UID: "6a76c696-18d1-491c-9d23-36e91f949eed") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573806 5109 secret.go:189] Couldn't get secret openshift-kube-storage-version-migrator-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573838 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert podName:d2e6c049-ef77-4bad-ab30-b499a7850c20 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073828352 +0000 UTC m=+122.910068341 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert") pod "kube-storage-version-migrator-operator-565b79b866-sqcqv" (UID: "d2e6c049-ef77-4bad-ab30-b499a7850c20") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573870 5109 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573904 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token podName:5e797401-b4ca-4489-9d49-5c3d32bd20e6 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073891974 +0000 UTC m=+122.910131963 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token") pod "machine-config-server-l48tx" (UID: "5e797401-b4ca-4489-9d49-5c3d32bd20e6") : failed to sync secret cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573942 5109 configmap.go:193] Couldn't get configMap openshift-kube-storage-version-migrator-operator/config: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.573976 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config podName:d2e6c049-ef77-4bad-ab30-b499a7850c20 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.073966666 +0000 UTC m=+122.910206655 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config") pod "kube-storage-version-migrator-operator-565b79b866-sqcqv" (UID: "d2e6c049-ef77-4bad-ab30-b499a7850c20") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.574011 5109 request.go:752] "Waited before sending request" delay="1.000935205s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/configmaps?fieldSelector=metadata.name%3Dconfig&resourceVersion=36779"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.575887 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.579149 5109 projected.go:289] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.579181 5109 projected.go:194] Error preparing data for projected volume kube-api-access-nnhlp for pod openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.579268 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp podName:8bf22cea-38f6-463c-97e7-b2a7feec536c nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.079248559 +0000 UTC m=+122.915488548 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-nnhlp" (UniqueName: "kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp") pod "migrator-866fcbc849-hfxtc" (UID: "8bf22cea-38f6-463c-97e7-b2a7feec536c") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.589233 5109 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" secret="" err="failed to sync secret cache: timed out waiting for the condition"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.589308 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6"
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.593043 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.593449 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.093435478 +0000 UTC m=+122.929675467 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.594597 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.596765 5109 projected.go:289] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.596785 5109 projected.go:289] Couldn't get configMap openshift-operator-lifecycle-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.596816 5109 projected.go:194] Error preparing data for projected volume kube-api-access-6nb6g for pod openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.596791 5109 projected.go:194] Error preparing data for projected volume kube-api-access-z4mqt for pod openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.596908 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g podName:baf9561a-4502-4e7e-b9af-acb69d721496 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.096882808 +0000 UTC m=+122.933122807 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-6nb6g" (UniqueName: "kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g") pod "catalog-operator-75ff9f647d-8fkxh" (UID: "baf9561a-4502-4e7e-b9af-acb69d721496") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.597006 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt podName:315ba213-ba49-4ab6-8b38-e3abe28ee907 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.096927699 +0000 UTC m=+122.933167688 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-z4mqt" (UniqueName: "kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt") pod "collect-profiles-29524320-r8sfn" (UID: "315ba213-ba49-4ab6-8b38-e3abe28ee907") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.610697 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-whng8"]
Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.614602 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.616126 5109 projected.go:289] Couldn't get configMap openshift-etcd-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.616157 5109 projected.go:194] Error preparing data for projected volume kube-api-access-cbt6n for pod openshift-etcd-operator/etcd-operator-69b85846b6-slgm9: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.616233 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n podName:a07a6721-367c-4f7a-b6a6-0266df632216 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.116215356 +0000 UTC m=+122.952455345 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-cbt6n" (UniqueName: "kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n") pod "etcd-operator-69b85846b6-slgm9" (UID: "a07a6721-367c-4f7a-b6a6-0266df632216") : failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.636352 5109 projected.go:289] Couldn't get configMap openshift-controller-manager-operator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.636386 5109 projected.go:194] Error preparing data for projected volume kube-api-access-jf27f for pod openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5: failed to sync configmap cache: timed out waiting for the condition
Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.636460 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f podName:caabdbf4-9047-45d1-a1ae-84fee87393c9 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.136440099 +0000 UTC m=+122.972680088 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-jf27f" (UniqueName: "kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f") pod "openshift-controller-manager-operator-686468bdd5-rm9p5" (UID: "caabdbf4-9047-45d1-a1ae-84fee87393c9") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.653179 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" event={"ID":"6fe136ed-c904-47d5-8df2-13350ff341d9","Type":"ContainerStarted","Data":"63207a015200304b5292655ce3cfd56d73fd6d5fdd8e4f8d7cbc7a05638d1358"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.654498 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" event={"ID":"50579d9d-c5d2-4f39-9a96-39cbd4ee8976","Type":"ContainerStarted","Data":"2a2594892955f104e683201486dac4ccf27d98b4af7eb64aa7ff1f202af90b4e"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.654544 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" event={"ID":"50579d9d-c5d2-4f39-9a96-39cbd4ee8976","Type":"ContainerStarted","Data":"75ec0b748034c7c63347271efb5d47b7baeba155695d59d4296c009163ce4595"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.654557 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" event={"ID":"50579d9d-c5d2-4f39-9a96-39cbd4ee8976","Type":"ContainerStarted","Data":"2ddac219d307f2e9ad1b146b13856369bc1331fe8f4cf305120ffb60c195699b"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.658038 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" 
event={"ID":"c65a4832-f511-4d14-8d80-25a2129b8e3a","Type":"ContainerStarted","Data":"ca6cdf13e7489b91e5d4fe796c73370ebefa8e1633ecae31d8d466c564f8b48f"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.658065 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" event={"ID":"c65a4832-f511-4d14-8d80-25a2129b8e3a","Type":"ContainerStarted","Data":"5e8d023e5536931d7b382de9bb2747f9bbcc50be014b3bf3b0d2c90566477963"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.659770 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" event={"ID":"45675682-2073-4412-90c7-940bf3274c7c","Type":"ContainerStarted","Data":"e9830166efad70fc58213859058dd1e6748ba7c2e31ef300c89430fffbbe4829"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.661009 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" event={"ID":"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf","Type":"ContainerStarted","Data":"4fe72773ab230f0fc30e5965d8b96547cb6b797ab8644ff1a900d78244be3f9b"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.661039 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" event={"ID":"dbf7d8d7-ef76-4af8-bc7e-91149dd703cf","Type":"ContainerStarted","Data":"e74f0972d3d507b0e80f55015c16ad18f369b5692d7407f631bac2e060bcc205"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.662504 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.664050 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" 
event={"ID":"c4130b11-7b60-4ee2-a12b-b498e2944738","Type":"ContainerStarted","Data":"6779b55df68e74523e07d8fd7a3f42a44a4fac638fab9b6fc26e351179b5b17e"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.664084 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" event={"ID":"c4130b11-7b60-4ee2-a12b-b498e2944738","Type":"ContainerStarted","Data":"24051b08f36a7a9d697c74a0158c954d57c75f5f3c6e98ca4e156c1717deaa6e"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.665577 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" event={"ID":"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351","Type":"ContainerStarted","Data":"e0a253e6fe22047a9025b97cdf47a8a565c0b75564aea5d3bdfd5111443dde87"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.665733 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" event={"ID":"d3c8fb21-9805-4b45-b5f4-0e5f1fb80351","Type":"ContainerStarted","Data":"0ea1192baa745225e2c25592570dd125d56bf922f5bb3e95a40fbacedac77654"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.667247 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" event={"ID":"bd0078b7-6236-4b58-a64f-bcb5753c7a89","Type":"ContainerStarted","Data":"423f44272e8d62ac8c1d46de208e22f64918fbb9da346ce29caf60f04addf944"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.667367 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" event={"ID":"bd0078b7-6236-4b58-a64f-bcb5753c7a89","Type":"ContainerStarted","Data":"ed1d7325101c99adcc16646ac4ee499e597fc8b539910f4e71acf1c345930dbb"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.667442 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" event={"ID":"bd0078b7-6236-4b58-a64f-bcb5753c7a89","Type":"ContainerStarted","Data":"062f9baf4867b1a8cd69d839822c4ebe748d48313fa2a904617b0992be7da0ac"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.669441 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" event={"ID":"070d6fda-192f-47cb-b873-192e072ff078","Type":"ContainerStarted","Data":"b4463a3a2ca2e39228d2602f540272741e3a4f019733ece8599fbfa48cd3878f"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.669710 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" event={"ID":"070d6fda-192f-47cb-b873-192e072ff078","Type":"ContainerStarted","Data":"18aad4643c33a67d86387b9f235a4f3b7602dbca518c08c690b31ef25790500e"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.669724 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" event={"ID":"070d6fda-192f-47cb-b873-192e072ff078","Type":"ContainerStarted","Data":"1ddd2624a039d6ed03a02c199a89f6cbe9e63632d568f4609af72d641ab326c5"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.671533 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" event={"ID":"fd26dc84-70f4-4c4c-b03b-556651eba161","Type":"ContainerStarted","Data":"045c8b3dfa8026cf3743e005c9bd4a122cf2c54d142b5be0d5b64e3fbb549599"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.675684 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" event={"ID":"e5d3ff4f-4af6-4aec-a501-3e4995505046","Type":"ContainerStarted","Data":"6cb28d0b2539a602410ab344fd3e33c7bf828b2f5423f0bf86c2a82d137a3170"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.675742 5109 
kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.676289 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.678689 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whng8" event={"ID":"decc90f6-d956-4221-b02d-e2e28b9f307a","Type":"ContainerStarted","Data":"38737e578d25da86f872664c4ce6982997810ef747453c1db15a529b859474e9"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.679779 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" event={"ID":"29790027-9f37-464a-aa38-74b8232996e9","Type":"ContainerStarted","Data":"aaa64f2c0481c9273513217fcae8981ad00e67397e9270cd96e1fbf1d945876c"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.683324 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" event={"ID":"ffac205b-047e-4cf8-bcc5-39a818ee5655","Type":"ContainerStarted","Data":"86ef05141ce80e0771e179df6537d063346d9fc4316f14154f658d6d5fe5223a"} Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.688297 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.689502 5109 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-mxvtz container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.689551 5109 
prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.691397 5109 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-nsncq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" start-of-body= Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.691465 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": dial tcp 10.217.0.28:6443: connect: connection refused" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.692095 5109 patch_prober.go:28] interesting pod/console-operator-67c89758df-v8z7c container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.692139 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" podUID="2034b852-cb28-4233-a522-58ff1fb7945c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/readyz\": dial tcp 10.217.0.21:8443: connect: connection refused" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.694944 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.695624 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.195594246 +0000 UTC m=+123.031834235 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.696713 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.704059 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.704348 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:33.204334929 +0000 UTC m=+123.040574918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.735869 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.756492 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.775761 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.786378 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rr6gc\" (UniqueName: \"kubernetes.io/projected/ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a-kube-api-access-rr6gc\") pod \"package-server-manager-77f986bd66-rzdqn\" (UID: \"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.788590 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6"] Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.788814 5109 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9bjjv\" (UniqueName: \"kubernetes.io/projected/3bfc9251-3e6e-4a23-b109-44bf2f780c4d-kube-api-access-9bjjv\") pod \"olm-operator-5cdf44d969-kk8zl\" (UID: \"3bfc9251-3e6e-4a23-b109-44bf2f780c4d\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.790828 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwj6g\" (UniqueName: \"kubernetes.io/projected/3d7fffb6-c104-482f-8c6a-33b3dd961b62-kube-api-access-zwj6g\") pod \"packageserver-7d4fc7d867-ggz6s\" (UID: \"3d7fffb6-c104-482f-8c6a-33b3dd961b62\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.795765 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.805393 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.807505 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.307476565 +0000 UTC m=+123.143716564 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.814893 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.816227 5109 projected.go:289] Couldn't get configMap openshift-ingress-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.857747 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.878315 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.885968 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.894606 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.909433 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:32 crc kubenswrapper[5109]: E0219 00:11:32.909923 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.409903111 +0000 UTC m=+123.246143100 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:32 crc kubenswrapper[5109]: W0219 00:11:32.931488 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd90a5916_ed50_483f_84e3_ec9e44da92f5.slice/crio-1429b3230626066a6b241ed1654fe9ef266812425601a8ba8e5c5c4b2926a3f6 WatchSource:0}: Error finding container 1429b3230626066a6b241ed1654fe9ef266812425601a8ba8e5c5c4b2926a3f6: Status 404 returned error can't find the container with id 1429b3230626066a6b241ed1654fe9ef266812425601a8ba8e5c5c4b2926a3f6 Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.938143 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.938509 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.945651 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.959723 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.974805 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.983469 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-27tzc\" (UniqueName: \"kubernetes.io/projected/691b2ad9-f837-4d45-a2bb-b99130bad14f-kube-api-access-27tzc\") pod \"ingress-canary-pnlfz\" (UID: \"691b2ad9-f837-4d45-a2bb-b99130bad14f\") " pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.983867 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ffa4ee7c-f211-40a7-ae2d-8996d8533102-kube-api-access\") pod \"kube-apiserver-operator-575994946d-gd89d\" (UID: \"ffa4ee7c-f211-40a7-ae2d-8996d8533102\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:32 crc kubenswrapper[5109]: I0219 00:11:32.995983 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.003977 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.012127 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.012240 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.512206212 +0000 UTC m=+123.348446211 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.012746 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.013162 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.513146989 +0000 UTC m=+123.349386978 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.016371 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.018976 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-rgj5z" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.040885 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.046508 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.057084 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.088021 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.109783 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114237 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114366 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114400 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc 
kubenswrapper[5109]: I0219 00:11:33.114423 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nnhlp\" (UniqueName: \"kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp\") pod \"migrator-866fcbc849-hfxtc\" (UID: \"8bf22cea-38f6-463c-97e7-b2a7feec536c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114463 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114478 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114501 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114542 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") pod \"collect-profiles-29524320-r8sfn\" (UID: 
\"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114665 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6nb6g\" (UniqueName: \"kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.114691 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.115433 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2e6c049-ef77-4bad-ab30-b499a7850c20-config\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.115512 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.615495093 +0000 UTC m=+123.451735082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.116692 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.117891 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.123522 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-tt7nq\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.123839 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e46cdbd-071c-446c-bee4-462001f9ef85-serving-cert\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.123911 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.124961 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5e797401-b4ca-4489-9d49-5c3d32bd20e6-node-bootstrap-token\") pod \"machine-config-server-l48tx\" (UID: \"5e797401-b4ca-4489-9d49-5c3d32bd20e6\") " pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.126013 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.129742 5109 projected.go:194] Error preparing data for projected volume kube-api-access-ss62t for pod openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s: failed to sync configmap cache: timed out waiting for the condition Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.129859 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t podName:37b7e6dc-12f7-4753-a22a-36fdc2abe7b6 nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.629836557 +0000 UTC m=+123.466076546 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-ss62t" (UniqueName: "kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t") pod "ingress-operator-6b9cb4dbcf-6p97s" (UID: "37b7e6dc-12f7-4753-a22a-36fdc2abe7b6") : failed to sync configmap cache: timed out waiting for the condition Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.130254 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.136768 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2e6c049-ef77-4bad-ab30-b499a7850c20-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.140607 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnhlp\" (UniqueName: \"kubernetes.io/projected/8bf22cea-38f6-463c-97e7-b2a7feec536c-kube-api-access-nnhlp\") pod \"migrator-866fcbc849-hfxtc\" (UID: \"8bf22cea-38f6-463c-97e7-b2a7feec536c\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.141381 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nb6g\" (UniqueName: \"kubernetes.io/projected/baf9561a-4502-4e7e-b9af-acb69d721496-kube-api-access-6nb6g\") pod \"catalog-operator-75ff9f647d-8fkxh\" (UID: \"baf9561a-4502-4e7e-b9af-acb69d721496\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.144565 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") pod \"collect-profiles-29524320-r8sfn\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.169235 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.178921 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.192380 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67km\" (UniqueName: \"kubernetes.io/projected/d2e6c049-ef77-4bad-ab30-b499a7850c20-kube-api-access-p67km\") pod \"kube-storage-version-migrator-operator-565b79b866-sqcqv\" (UID: \"d2e6c049-ef77-4bad-ab30-b499a7850c20\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.197923 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.202282 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ktrg\" (UniqueName: \"kubernetes.io/projected/859e96d6-c432-4486-9efc-9e57147a0cdc-kube-api-access-8ktrg\") pod \"service-ca-74545575db-zhjpv\" (UID: \"859e96d6-c432-4486-9efc-9e57147a0cdc\") " pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.205445 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-24vbc\" (UniqueName: 
\"kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc\") pod \"marketplace-operator-547dbd544d-ddddh\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.215363 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.215995 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.216052 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cbt6n\" (UniqueName: \"kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.216895 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.217304 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.71728202 +0000 UTC m=+123.553522079 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.221403 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jf27f\" (UniqueName: \"kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.226479 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbt6n\" (UniqueName: \"kubernetes.io/projected/a07a6721-367c-4f7a-b6a6-0266df632216-kube-api-access-cbt6n\") pod \"etcd-operator-69b85846b6-slgm9\" (UID: \"a07a6721-367c-4f7a-b6a6-0266df632216\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.226836 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfngp\" (UniqueName: \"kubernetes.io/projected/8e46cdbd-071c-446c-bee4-462001f9ef85-kube-api-access-tfngp\") pod \"service-ca-operator-5b9c976747-kwkd6\" (UID: \"8e46cdbd-071c-446c-bee4-462001f9ef85\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.228697 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf27f\" (UniqueName: 
\"kubernetes.io/projected/caabdbf4-9047-45d1-a1ae-84fee87393c9-kube-api-access-jf27f\") pod \"openshift-controller-manager-operator-686468bdd5-rm9p5\" (UID: \"caabdbf4-9047-45d1-a1ae-84fee87393c9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.229438 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-pnlfz" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.257023 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.262332 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.278934 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.282871 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-l48tx" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.321003 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.322469 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.323316 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.323800 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.823783124 +0000 UTC m=+123.660023103 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.338119 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.348980 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.379293 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.382707 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.399109 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.399288 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.418695 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 19 00:11:33 crc kubenswrapper[5109]: W0219 00:11:33.421823 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e797401_b4ca_4489_9d49_5c3d32bd20e6.slice/crio-ea421a7a13b94c755aa73b7d11771f285e57a67c25043ff74c9b1111b8fb302c WatchSource:0}: Error finding container ea421a7a13b94c755aa73b7d11771f285e57a67c25043ff74c9b1111b8fb302c: Status 404 returned error can't find the container with id ea421a7a13b94c755aa73b7d11771f285e57a67c25043ff74c9b1111b8fb302c Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.424676 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.425048 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:33.925037686 +0000 UTC m=+123.761277675 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.425081 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.462174 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-rgj5z"] Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.476953 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.488213 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-zhjpv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.489421 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36958: no serving certificate available for the kubelet" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.500382 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.503144 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.517967 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.521889 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.526130 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.526470 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.026455522 +0000 UTC m=+123.862695511 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.564184 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.573320 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.584074 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36962: no serving certificate available for the kubelet" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.584377 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n"] Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.628440 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.628960 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:34.12894267 +0000 UTC m=+123.965182659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.678999 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36968: no serving certificate available for the kubelet" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.695141 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-trt7v"] Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.705433 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l48tx" event={"ID":"5e797401-b4ca-4489-9d49-5c3d32bd20e6","Type":"ContainerStarted","Data":"ea421a7a13b94c755aa73b7d11771f285e57a67c25043ff74c9b1111b8fb302c"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.706623 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" event={"ID":"d90a5916-ed50-483f-84e3-ec9e44da92f5","Type":"ContainerStarted","Data":"007ba7cb37ad93e8a56a84991c67e4f47f20df507cdca8fc01491f86c2d177e4"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.706660 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" event={"ID":"d90a5916-ed50-483f-84e3-ec9e44da92f5","Type":"ContainerStarted","Data":"1429b3230626066a6b241ed1654fe9ef266812425601a8ba8e5c5c4b2926a3f6"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.709314 5109 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" event={"ID":"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91","Type":"ContainerStarted","Data":"624f48c7195a6897b011e007ac4550cb113424537cfff0735edae017b9a6887c"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.709372 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" event={"ID":"d2ad403f-3bd2-4b56-8b7a-60ea6b409f91","Type":"ContainerStarted","Data":"ee55f4f7cfd5dc52d80b13d9ad74caffc52ed127f91eb47a4113e500deecb097"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.722935 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" event={"ID":"6a76c696-18d1-491c-9d23-36e91f949eed","Type":"ContainerStarted","Data":"ecb9334b93695da60442069932e925545359541391a3c220dc1f53b9bab7667c"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.735389 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.735518 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.235483704 +0000 UTC m=+124.071723693 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.735975 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.736127 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ss62t\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.738017 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.237997127 +0000 UTC m=+124.074237126 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.750115 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-4d9db"] Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.750163 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-rgj5z" event={"ID":"753c6b93-7309-452f-b10c-8aa1c730a48a","Type":"ContainerStarted","Data":"5f83dd4a8692dad708d33f52c8b960b5c6f4beb3e9a280dd20bf5a2d8548562e"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.759961 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss62t\" (UniqueName: \"kubernetes.io/projected/37b7e6dc-12f7-4753-a22a-36fdc2abe7b6-kube-api-access-ss62t\") pod \"ingress-operator-6b9cb4dbcf-6p97s\" (UID: \"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.775402 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36972: no serving certificate available for the kubelet" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.779067 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" event={"ID":"29790027-9f37-464a-aa38-74b8232996e9","Type":"ContainerStarted","Data":"f3634eff8caa143b04dd3677a6e7687e91dfaa2f4ccc80bc37ee05f7e4d8d975"} Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.799099 5109 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-7lfng" podStartSLOduration=101.79908105 podStartE2EDuration="1m41.79908105s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:33.792570022 +0000 UTC m=+123.628810011" watchObservedRunningTime="2026-02-19 00:11:33.79908105 +0000 UTC m=+123.635321039" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.835835 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mtsdx" podStartSLOduration=101.83581172 podStartE2EDuration="1m41.83581172s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:33.834296246 +0000 UTC m=+123.670536235" watchObservedRunningTime="2026-02-19 00:11:33.83581172 +0000 UTC m=+123.672051709" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.840444 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.840647 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.340586427 +0000 UTC m=+124.176826416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.841500 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.849795 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.349781123 +0000 UTC m=+124.186021112 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.888121 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36982: no serving certificate available for the kubelet" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.888190 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.945123 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:33 crc kubenswrapper[5109]: E0219 00:11:33.947296 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.447274036 +0000 UTC m=+124.283514045 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.978269 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.979578 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" Feb 19 00:11:33 crc kubenswrapper[5109]: I0219 00:11:33.986140 5109 ???:1] "http: TLS handshake error from 192.168.126.11:36990: no serving certificate available for the kubelet" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.020355 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" podStartSLOduration=102.020336465 podStartE2EDuration="1m42.020336465s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:33.966182032 +0000 UTC m=+123.802422021" watchObservedRunningTime="2026-02-19 00:11:34.020336465 +0000 UTC m=+123.856576454" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.021152 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" podStartSLOduration=102.021146178 podStartE2EDuration="1m42.021146178s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.020278623 +0000 UTC m=+123.856518612" watchObservedRunningTime="2026-02-19 00:11:34.021146178 +0000 UTC m=+123.857386167" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.032241 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.033520 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.046887 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.047317 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.547300543 +0000 UTC m=+124.383540532 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.056724 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29524320-lgkhz" podStartSLOduration=102.056710144 podStartE2EDuration="1m42.056710144s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.055270793 +0000 UTC m=+123.891510772" watchObservedRunningTime="2026-02-19 00:11:34.056710144 +0000 UTC m=+123.892950133" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.092262 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk" podStartSLOduration=102.09225026 podStartE2EDuration="1m42.09225026s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.090439998 +0000 UTC m=+123.926679987" watchObservedRunningTime="2026-02-19 00:11:34.09225026 +0000 UTC m=+123.928490249" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.119123 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37002: no serving certificate available for the kubelet" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.153121 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.153362 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.653332143 +0000 UTC m=+124.489572132 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.153901 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.154188 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.654181837 +0000 UTC m=+124.490421826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.208314 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-5464h" podStartSLOduration=102.208294369 podStartE2EDuration="1m42.208294369s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.207131555 +0000 UTC m=+124.043371554" watchObservedRunningTime="2026-02-19 00:11:34.208294369 +0000 UTC m=+124.044534358" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.240006 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37018: no serving certificate available for the kubelet" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.260418 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.262532 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:34.762509893 +0000 UTC m=+124.598749882 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.262598 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.262979 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.762973227 +0000 UTC m=+124.599213216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.297329 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" podStartSLOduration=102.297317028 podStartE2EDuration="1m42.297317028s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.296409861 +0000 UTC m=+124.132649850" watchObservedRunningTime="2026-02-19 00:11:34.297317028 +0000 UTC m=+124.133557017" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.306837 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl"] Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.369002 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.369328 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:34.869312635 +0000 UTC m=+124.705552624 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.403787 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" podStartSLOduration=102.40377342 podStartE2EDuration="1m42.40377342s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.402182604 +0000 UTC m=+124.238422593" watchObservedRunningTime="2026-02-19 00:11:34.40377342 +0000 UTC m=+124.240013409" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.442292 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-j969t" podStartSLOduration=102.442271681 podStartE2EDuration="1m42.442271681s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.438050509 +0000 UTC m=+124.274290498" watchObservedRunningTime="2026-02-19 00:11:34.442271681 +0000 UTC m=+124.278511670" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.477421 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.477878 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:34.977863008 +0000 UTC m=+124.814103007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.491781 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-4tgzn" podStartSLOduration=102.491760279 podStartE2EDuration="1m42.491760279s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.480567866 +0000 UTC m=+124.316807855" watchObservedRunningTime="2026-02-19 00:11:34.491760279 +0000 UTC m=+124.328000268" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.538724 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" podStartSLOduration=102.538708784 podStartE2EDuration="1m42.538708784s" podCreationTimestamp="2026-02-19 
00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.537004215 +0000 UTC m=+124.373244204" watchObservedRunningTime="2026-02-19 00:11:34.538708784 +0000 UTC m=+124.374948773" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.581357 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.581722 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.081688434 +0000 UTC m=+124.917928423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.582046 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" podStartSLOduration=102.582036814 podStartE2EDuration="1m42.582036814s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.580941892 +0000 UTC m=+124.417181881" watchObservedRunningTime="2026-02-19 00:11:34.582036814 +0000 UTC m=+124.418276803" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.706490 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.716249 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.216230537 +0000 UTC m=+125.052470526 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.783833 5109 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-nsncq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.28:6443/healthz\": context deadline exceeded" start-of-body= Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.783903 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.28:6443/healthz\": context deadline exceeded" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.811830 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.812176 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.312160885 +0000 UTC m=+125.148400874 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.819039 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-l48tx" event={"ID":"5e797401-b4ca-4489-9d49-5c3d32bd20e6","Type":"ContainerStarted","Data":"08793290f4e85fb3ec06cdafdf5e69b900aab84490e4c98b5b44c0b412743fd8"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.843958 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s"] Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.847134 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" event={"ID":"3bfc9251-3e6e-4a23-b109-44bf2f780c4d","Type":"ContainerStarted","Data":"5f60c5c11ac96e9bd61f26cd4de276144ca081f8a6de786274880178cd2c87b4"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.847462 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.860863 5109 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kk8zl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.860914 5109 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" podUID="3bfc9251-3e6e-4a23-b109-44bf2f780c4d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.864060 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-4d9db" event={"ID":"7169447a-e4aa-4492-99f3-0d21fe813f69","Type":"ContainerStarted","Data":"89b5823d2f55ffefd5fe3b5db2af208a48f61e5d596ec661d832cc8a8c9f2e9a"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.864106 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-4d9db" event={"ID":"7169447a-e4aa-4492-99f3-0d21fe813f69","Type":"ContainerStarted","Data":"eb001ab0a42fb152c5d7898e597c281ac6d232cc12d32a5f74a8de1bce657349"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.879923 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trt7v" event={"ID":"d0b307e4-b2bd-4498-be5e-38320e2b1350","Type":"ContainerStarted","Data":"8379c62835a6f2a397213f9041d4d7ee8856b4da1e5364f483cc09cdc3d8bc05"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.887882 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-p2dmz" podStartSLOduration=102.88786581 podStartE2EDuration="1m42.88786581s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.882806784 +0000 UTC m=+124.719046773" watchObservedRunningTime="2026-02-19 00:11:34.88786581 +0000 UTC m=+124.724105799" Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.888794 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-etcd-operator/etcd-operator-69b85846b6-slgm9"] Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.888865 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d"] Feb 19 00:11:34 crc kubenswrapper[5109]: W0219 00:11:34.895187 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d7fffb6_c104_482f_8c6a_33b3dd961b62.slice/crio-8fa183808dedee5c54ff9dfb718a1f0cf1d7a1a3313b5dd710850efae4ae0ab7 WatchSource:0}: Error finding container 8fa183808dedee5c54ff9dfb718a1f0cf1d7a1a3313b5dd710850efae4ae0ab7: Status 404 returned error can't find the container with id 8fa183808dedee5c54ff9dfb718a1f0cf1d7a1a3313b5dd710850efae4ae0ab7 Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.895260 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" event={"ID":"efdaca02-411e-4c67-adec-db205b4e67cf","Type":"ContainerStarted","Data":"43eb25f65fb684a3323003883ed9d1ec5ee919f48ae87effb3e44ab025ee3f5b"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.895288 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" event={"ID":"efdaca02-411e-4c67-adec-db205b4e67cf","Type":"ContainerStarted","Data":"ca4c8dd0278f49de940d8eecdad5ca3941525fbd53692847b65e097fb2439e13"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.895298 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" event={"ID":"efdaca02-411e-4c67-adec-db205b4e67cf","Type":"ContainerStarted","Data":"3bcba518226e386bc8f64deb45e424b2b28dcf27f3f39b76e9557f83bb947233"} Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.898000 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj 
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:11:34 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 19 00:11:34 crc kubenswrapper[5109]: [+]process-running ok
Feb 19 00:11:34 crc kubenswrapper[5109]: healthz check failed
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.898040 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.916867 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:34 crc kubenswrapper[5109]: E0219 00:11:34.918070 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.418057051 +0000 UTC m=+125.254297040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.922248 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" event={"ID":"6a76c696-18d1-491c-9d23-36e91f949eed","Type":"ContainerStarted","Data":"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b"}
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.922999 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.943690 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn"]
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.945416 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-pnlfz"]
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.947939 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-rgj5z" event={"ID":"753c6b93-7309-452f-b10c-8aa1c730a48a","Type":"ContainerStarted","Data":"45f9daf2893a27f442c2ad77d7dc0df58724ab985bffcffdf84d6db008f32811"}
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.958800 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5"]
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.960201 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-rgj5z"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.964885 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn"]
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.970782 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-rgj5z container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.970844 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-rgj5z" podUID="753c6b93-7309-452f-b10c-8aa1c730a48a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.971564 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37028: no serving certificate available for the kubelet"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.978076 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:11:34 crc kubenswrapper[5109]: I0219 00:11:34.985612 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-2zvq6" podStartSLOduration=102.98559808 podStartE2EDuration="1m42.98559808s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:34.983748857 +0000 UTC m=+124.819988846" watchObservedRunningTime="2026-02-19 00:11:34.98559808 +0000 UTC m=+124.821838069"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.017907 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.018718 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.518701475 +0000 UTC m=+125.354941464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.022531 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-nqqjk" podStartSLOduration=103.022514085 podStartE2EDuration="1m43.022514085s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.015986337 +0000 UTC m=+124.852226326" watchObservedRunningTime="2026-02-19 00:11:35.022514085 +0000 UTC m=+124.858754074"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.028330 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-wtftk"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.028364 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.036824 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podStartSLOduration=103.036811918 podStartE2EDuration="1m43.036811918s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.034982565 +0000 UTC m=+124.871222544" watchObservedRunningTime="2026-02-19 00:11:35.036811918 +0000 UTC m=+124.873051907"
Feb 19 00:11:35 crc kubenswrapper[5109]: W0219 00:11:35.046026 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod691b2ad9_f837_4d45_a2bb_b99130bad14f.slice/crio-20c21e16bff5fccc78ff6216d9bd82906c854c590b52bb1eac10668fe2fa4b26 WatchSource:0}: Error finding container 20c21e16bff5fccc78ff6216d9bd82906c854c590b52bb1eac10668fe2fa4b26: Status 404 returned error can't find the container with id 20c21e16bff5fccc78ff6216d9bd82906c854c590b52bb1eac10668fe2fa4b26
Feb 19 00:11:35 crc kubenswrapper[5109]: W0219 00:11:35.075328 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbaf9561a_4502_4e7e_b9af_acb69d721496.slice/crio-f1f3e76a9cfd3679cf3fe25c91f38dd994bc72d27634fc06931df97c4c9cf1bb WatchSource:0}: Error finding container f1f3e76a9cfd3679cf3fe25c91f38dd994bc72d27634fc06931df97c4c9cf1bb: Status 404 returned error can't find the container with id f1f3e76a9cfd3679cf3fe25c91f38dd994bc72d27634fc06931df97c4c9cf1bb
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.087371 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-j8qfk" podStartSLOduration=103.087356417 podStartE2EDuration="1m43.087356417s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.085189714 +0000 UTC m=+124.921429703" watchObservedRunningTime="2026-02-19 00:11:35.087356417 +0000 UTC m=+124.923596396"
Feb 19 00:11:35 crc kubenswrapper[5109]: W0219 00:11:35.106889 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf22cea_38f6_463c_97e7_b2a7feec536c.slice/crio-3bc4140065593639a588f621a68166cd9c4ca4dde3707641fee1c3434c64c489 WatchSource:0}: Error finding container 3bc4140065593639a588f621a68166cd9c4ca4dde3707641fee1c3434c64c489: Status 404 returned error can't find the container with id 3bc4140065593639a588f621a68166cd9c4ca4dde3707641fee1c3434c64c489
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.107827 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.113422 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-zhjpv"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.120463 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.124012 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.623993414 +0000 UTC m=+125.460233403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.152441 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.199593 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.210923 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.225148 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.226885 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.726834152 +0000 UTC m=+125.563074151 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: W0219 00:11:35.244783 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd92fdf2_3d74_4fac_af8c_c7fe7b025492.slice/crio-e0bf08f408eb0008b939137485c837a009523fe04a20c4fb60a51e6049f7f4b6 WatchSource:0}: Error finding container e0bf08f408eb0008b939137485c837a009523fe04a20c4fb60a51e6049f7f4b6: Status 404 returned error can't find the container with id e0bf08f408eb0008b939137485c837a009523fe04a20c4fb60a51e6049f7f4b6
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.286963 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-vqhpb" podStartSLOduration=103.286944206 podStartE2EDuration="1m43.286944206s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.249105214 +0000 UTC m=+125.085345203" watchObservedRunningTime="2026-02-19 00:11:35.286944206 +0000 UTC m=+125.123184195"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.312802 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s"]
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.326937 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.327324 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.827311221 +0000 UTC m=+125.663551210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.427968 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.428296 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.928253174 +0000 UTC m=+125.764493153 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.429255 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.429755 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:35.929747667 +0000 UTC m=+125.765987656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.493114 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-qpwhk" podStartSLOduration=103.493100345 podStartE2EDuration="1m43.493100345s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.491070557 +0000 UTC m=+125.327310546" watchObservedRunningTime="2026-02-19 00:11:35.493100345 +0000 UTC m=+125.329340324"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.534069 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.534347 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.034332045 +0000 UTC m=+125.870572034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.640801 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.641181 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.141160988 +0000 UTC m=+125.977401037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.651171 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-rgj5z" podStartSLOduration=103.651154166 podStartE2EDuration="1m43.651154166s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.649815538 +0000 UTC m=+125.486055537" watchObservedRunningTime="2026-02-19 00:11:35.651154166 +0000 UTC m=+125.487394155"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.677937 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.677981 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.704103 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.706492 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.708321 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.727422 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" podStartSLOduration=103.727402617 podStartE2EDuration="1m43.727402617s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.693950472 +0000 UTC m=+125.530190461" watchObservedRunningTime="2026-02-19 00:11:35.727402617 +0000 UTC m=+125.563642606"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.742928 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.743567 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.243547913 +0000 UTC m=+126.079787912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.845455 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g"
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.846219 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.346199495 +0000 UTC m=+126.182439544 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.872585 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-kvzlc" podStartSLOduration=103.872560896 podStartE2EDuration="1m43.872560896s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.868051896 +0000 UTC m=+125.704291895" watchObservedRunningTime="2026-02-19 00:11:35.872560896 +0000 UTC m=+125.708800885"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.893759 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 19 00:11:35 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld
Feb 19 00:11:35 crc kubenswrapper[5109]: [+]process-running ok
Feb 19 00:11:35 crc kubenswrapper[5109]: healthz check failed
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.893834 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.928865 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-pxg5n" podStartSLOduration=103.92884803 podStartE2EDuration="1m43.92884803s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.927994545 +0000 UTC m=+125.764234534" watchObservedRunningTime="2026-02-19 00:11:35.92884803 +0000 UTC m=+125.765088019"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.947263 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 19 00:11:35 crc kubenswrapper[5109]: E0219 00:11:35.947567 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.44755097 +0000 UTC m=+126.283790959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.959444 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" event={"ID":"a07a6721-367c-4f7a-b6a6-0266df632216","Type":"ContainerStarted","Data":"b8270223efb575bbf09fa584921479866c62e31ebf84436b23b8e09f9cce9a71"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.959493 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" event={"ID":"a07a6721-367c-4f7a-b6a6-0266df632216","Type":"ContainerStarted","Data":"2f1d521641fd658ded2723119c2ca7d0f91b2fd782faa069127a7d473d9fb5b7"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.963589 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" event={"ID":"d2e6c049-ef77-4bad-ab30-b499a7850c20","Type":"ContainerStarted","Data":"ef66e163e6163d476c6f68187298417e37b942f68a02b1951cdcfc0fc5191158"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.963666 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" event={"ID":"d2e6c049-ef77-4bad-ab30-b499a7850c20","Type":"ContainerStarted","Data":"420e17e502efa1cadb349eb45dc0eab54acd10540842c6e6d737235fd413a3d2"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.969315 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-l48tx" podStartSLOduration=7.9693021680000005 podStartE2EDuration="7.969302168s" podCreationTimestamp="2026-02-19 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:35.968176035 +0000 UTC m=+125.804416024" watchObservedRunningTime="2026-02-19 00:11:35.969302168 +0000 UTC m=+125.805542157"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.971768 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" event={"ID":"8e46cdbd-071c-446c-bee4-462001f9ef85","Type":"ContainerStarted","Data":"2ff24887e868782db2083bb6c869378d334149fe187e4bf5b76a4e74c11ca16f"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.971827 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" event={"ID":"8e46cdbd-071c-446c-bee4-462001f9ef85","Type":"ContainerStarted","Data":"c38069d51973d6792600315bee9a247d8ce888100711311f79b863c837e8fa4c"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.973456 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" event={"ID":"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6","Type":"ContainerStarted","Data":"05ef19038f3449577bff035ff956cb0958527425fb52d796318d1b53fe63dcfa"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.973495 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" event={"ID":"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6","Type":"ContainerStarted","Data":"f8485b032cf138a7501a2866c7c2aacb025a685a4f6c1d3721824eb7f6a1e6e5"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.975341 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" event={"ID":"dd92fdf2-3d74-4fac-af8c-c7fe7b025492","Type":"ContainerStarted","Data":"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.975385 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" event={"ID":"dd92fdf2-3d74-4fac-af8c-c7fe7b025492","Type":"ContainerStarted","Data":"e0bf08f408eb0008b939137485c837a009523fe04a20c4fb60a51e6049f7f4b6"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.975819 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.977434 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ddddh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.977487 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.978379 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" event={"ID":"baf9561a-4502-4e7e-b9af-acb69d721496","Type":"ContainerStarted","Data":"5d31d9a1597c418c1ca064e0d8cb38922bfa7c83e96b6f23fe6720ddba5239de"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.978425 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" event={"ID":"baf9561a-4502-4e7e-b9af-acb69d721496","Type":"ContainerStarted","Data":"f1f3e76a9cfd3679cf3fe25c91f38dd994bc72d27634fc06931df97c4c9cf1bb"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.979187 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.980672 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" event={"ID":"caabdbf4-9047-45d1-a1ae-84fee87393c9","Type":"ContainerStarted","Data":"23d203b8af0073da8d277eed0a7522a69b58b15c5207a67a6b5b2465928bf9ba"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.980698 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" event={"ID":"caabdbf4-9047-45d1-a1ae-84fee87393c9","Type":"ContainerStarted","Data":"41e8a45556402b4b298bf7d1cb926472bbb6c0b052927621cecd823c68f83623"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.980802 5109 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-8fkxh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.980843 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" podUID="baf9561a-4502-4e7e-b9af-acb69d721496" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.982571 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-zhjpv" event={"ID":"859e96d6-c432-4486-9efc-9e57147a0cdc","Type":"ContainerStarted","Data":"9fe25854795e74692b406e13dd3ebda549299994a9d6c40b08724dc7988f3f13"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.982594 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-zhjpv" event={"ID":"859e96d6-c432-4486-9efc-9e57147a0cdc","Type":"ContainerStarted","Data":"6bbc1fb40571a393cd279bd5291c3d375835c078cbe8ff9832ba84f7579ef828"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.988435 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" event={"ID":"ffa4ee7c-f211-40a7-ae2d-8996d8533102","Type":"ContainerStarted","Data":"9846b045d896d1e4e55c22c8e9187ba21f4375dd9db5c730483565eff61db0ef"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.988474 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" event={"ID":"ffa4ee7c-f211-40a7-ae2d-8996d8533102","Type":"ContainerStarted","Data":"fcd38203170aa4499149e96965c9e89aef9fa008a3f6a45ff583cc52b67eed4a"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.994054 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" event={"ID":"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a","Type":"ContainerStarted","Data":"417709df85decaf4ede4bb852a245f9cf4fee8537d46c7231b4d9ff9fcf03eaa"}
Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.994102 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn"
event={"ID":"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a","Type":"ContainerStarted","Data":"1a3d922ecfad636f174da6d8b1fc214cdd776f6fc3aeec1436e51ff01d96f8fb"} Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.994120 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" event={"ID":"ebffcdcb-f67f-40e8-9c1a-296f0c5dad2a","Type":"ContainerStarted","Data":"d3a07139698278dbf5b88acb45ecb8ba7daf2efac91d73376079b2527cdfa87f"} Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.994524 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.999180 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pnlfz" event={"ID":"691b2ad9-f837-4d45-a2bb-b99130bad14f","Type":"ContainerStarted","Data":"f0cc83b4482baea7796d1cfc5b131aab1173a3c21803a5b99fcde00e4f7f5eea"} Feb 19 00:11:35 crc kubenswrapper[5109]: I0219 00:11:35.999232 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-pnlfz" event={"ID":"691b2ad9-f837-4d45-a2bb-b99130bad14f","Type":"ContainerStarted","Data":"20c21e16bff5fccc78ff6216d9bd82906c854c590b52bb1eac10668fe2fa4b26"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.008795 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" event={"ID":"8bf22cea-38f6-463c-97e7-b2a7feec536c","Type":"ContainerStarted","Data":"b11402323fbfabef6de544a77b2536a74de6c1617bdd80adea9d56500b75bc93"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.008851 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" 
event={"ID":"8bf22cea-38f6-463c-97e7-b2a7feec536c","Type":"ContainerStarted","Data":"d13a5e434eb617857bd7818281f5627f051addbe878c90ace353181644cae84e"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.008864 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" event={"ID":"8bf22cea-38f6-463c-97e7-b2a7feec536c","Type":"ContainerStarted","Data":"3bc4140065593639a588f621a68166cd9c4ca4dde3707641fee1c3434c64c489"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.013897 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" event={"ID":"3d7fffb6-c104-482f-8c6a-33b3dd961b62","Type":"ContainerStarted","Data":"0a945c7830e619b5d083665e589310da8a05ee16697cb57e33db91067214db52"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.013933 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" event={"ID":"3d7fffb6-c104-482f-8c6a-33b3dd961b62","Type":"ContainerStarted","Data":"8fa183808dedee5c54ff9dfb718a1f0cf1d7a1a3313b5dd710850efae4ae0ab7"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.014558 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.015903 5109 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ggz6s container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.015953 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" podUID="3d7fffb6-c104-482f-8c6a-33b3dd961b62" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.024134 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" event={"ID":"3bfc9251-3e6e-4a23-b109-44bf2f780c4d","Type":"ContainerStarted","Data":"79a9f837bceea5a1ab635e462961fbc72edd5d2c7f03f03961a512c25a260599"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.025082 5109 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-kk8zl container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.025130 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" podUID="3bfc9251-3e6e-4a23-b109-44bf2f780c4d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.028254 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" event={"ID":"315ba213-ba49-4ab6-8b38-e3abe28ee907","Type":"ContainerStarted","Data":"29ea071f14c441255330c18b6a9c0f97e81e5052c8d74ca56c45345ac6a954fd"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.028544 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" event={"ID":"315ba213-ba49-4ab6-8b38-e3abe28ee907","Type":"ContainerStarted","Data":"48d1c4c93e467f21957a7fa836ca45689d808e286b62925827b449b8a68cf1a5"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 
00:11:36.031696 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trt7v" event={"ID":"d0b307e4-b2bd-4498-be5e-38320e2b1350","Type":"ContainerStarted","Data":"b766c2a89bb505457b412f311d1b2d7c1f0698e227e885a82262be643530a7d7"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.031745 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-trt7v" event={"ID":"d0b307e4-b2bd-4498-be5e-38320e2b1350","Type":"ContainerStarted","Data":"a81d1c66616d0f47e46108ebe7adaaa0edaab770aa5324d23b089446e1fe3580"} Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.033457 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.035158 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-rgj5z container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.035217 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-rgj5z" podUID="753c6b93-7309-452f-b10c-8aa1c730a48a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.037123 5109 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-tgx9p container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]log ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]etcd ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 19 00:11:36 crc kubenswrapper[5109]: 
[+]poststarthook/generic-apiserver-start-informers ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/max-in-flight-filter ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/image.openshift.io-apiserver-caches ok Feb 19 00:11:36 crc kubenswrapper[5109]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Feb 19 00:11:36 crc kubenswrapper[5109]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/project.openshift.io-projectcache ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/openshift.io-startinformers ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/openshift.io-restmapperupdater ok Feb 19 00:11:36 crc kubenswrapper[5109]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 19 00:11:36 crc kubenswrapper[5109]: livez check failed Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.037177 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" podUID="c4130b11-7b60-4ee2-a12b-b498e2944738" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.043036 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-5hvvj" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.048602 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: 
\"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.049030 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.549015488 +0000 UTC m=+126.385255477 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.051133 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" podStartSLOduration=8.051115799 podStartE2EDuration="8.051115799s" podCreationTimestamp="2026-02-19 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.049995746 +0000 UTC m=+125.886235745" watchObservedRunningTime="2026-02-19 00:11:36.051115799 +0000 UTC m=+125.887355788" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.052054 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-4d9db" podStartSLOduration=104.052044905 podStartE2EDuration="1m44.052044905s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.008872219 +0000 UTC m=+125.845112208" 
watchObservedRunningTime="2026-02-19 00:11:36.052044905 +0000 UTC m=+125.888284894" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.069283 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.150054 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.150242 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.650192528 +0000 UTC m=+126.486432517 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.154161 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.154520 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.654503702 +0000 UTC m=+126.490743691 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.255822 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.256036 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.756011801 +0000 UTC m=+126.592251800 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.256386 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.256911 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.756900447 +0000 UTC m=+126.593140436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.278548 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37044: no serving certificate available for the kubelet" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.324256 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-zhjpv" podStartSLOduration=104.32421743 podStartE2EDuration="1m44.32421743s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.323097257 +0000 UTC m=+126.159337246" watchObservedRunningTime="2026-02-19 00:11:36.32421743 +0000 UTC m=+126.160457419" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.357733 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.357913 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:36.857887551 +0000 UTC m=+126.694127540 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.358342 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.358717 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.858699985 +0000 UTC m=+126.694940014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.365028 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-trt7v" podStartSLOduration=8.365011027 podStartE2EDuration="8.365011027s" podCreationTimestamp="2026-02-19 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.364010158 +0000 UTC m=+126.200250177" watchObservedRunningTime="2026-02-19 00:11:36.365011027 +0000 UTC m=+126.201251016" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.407550 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-pnlfz" podStartSLOduration=8.407526054 podStartE2EDuration="8.407526054s" podCreationTimestamp="2026-02-19 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.40429262 +0000 UTC m=+126.240532609" watchObservedRunningTime="2026-02-19 00:11:36.407526054 +0000 UTC m=+126.243766043" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.446614 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-hfxtc" podStartSLOduration=104.446598461 podStartE2EDuration="1m44.446598461s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.445076187 +0000 UTC m=+126.281316196" watchObservedRunningTime="2026-02-19 00:11:36.446598461 +0000 UTC m=+126.282838450" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.459104 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.459717 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:36.959695749 +0000 UTC m=+126.795935738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.490500 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-slgm9" podStartSLOduration=104.490478338 podStartE2EDuration="1m44.490478338s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.489793918 +0000 UTC m=+126.326033907" watchObservedRunningTime="2026-02-19 00:11:36.490478338 +0000 UTC m=+126.326718337" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.560707 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.561158 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.061137247 +0000 UTC m=+126.897377266 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.574088 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-sqcqv" podStartSLOduration=104.57407363 podStartE2EDuration="1m44.57407363s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.56783528 +0000 UTC m=+126.404075269" watchObservedRunningTime="2026-02-19 00:11:36.57407363 +0000 UTC m=+126.410313609" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.579028 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.646238 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" podStartSLOduration=104.646220391 podStartE2EDuration="1m44.646220391s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.641374421 +0000 UTC m=+126.477614410" watchObservedRunningTime="2026-02-19 00:11:36.646220391 +0000 UTC m=+126.482460380" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.663389 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.663544 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.16351324 +0000 UTC m=+126.999753229 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.663977 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.664559 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.16454828 +0000 UTC m=+127.000788279 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.672394 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" podStartSLOduration=104.672372906 podStartE2EDuration="1m44.672372906s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.672123408 +0000 UTC m=+126.508363397" watchObservedRunningTime="2026-02-19 00:11:36.672372906 +0000 UTC m=+126.508612905" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.748096 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-rm9p5" podStartSLOduration=104.74808183 podStartE2EDuration="1m44.74808183s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.743268202 +0000 UTC m=+126.579508191" watchObservedRunningTime="2026-02-19 00:11:36.74808183 +0000 UTC m=+126.584321819" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.765377 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.765573 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.265541154 +0000 UTC m=+127.101781143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.766023 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.766383 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.266372428 +0000 UTC m=+127.102612417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.815986 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" podStartSLOduration=104.815965059 podStartE2EDuration="1m44.815965059s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.76989442 +0000 UTC m=+126.606134409" watchObservedRunningTime="2026-02-19 00:11:36.815965059 +0000 UTC m=+126.652205048" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.858789 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" podStartSLOduration=103.858774135 podStartE2EDuration="1m43.858774135s" podCreationTimestamp="2026-02-19 00:09:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.857070866 +0000 UTC m=+126.693310855" watchObservedRunningTime="2026-02-19 00:11:36.858774135 +0000 UTC m=+126.695014124" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.860302 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-gd89d" podStartSLOduration=104.860295379 podStartE2EDuration="1m44.860295379s" podCreationTimestamp="2026-02-19 00:09:52 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.816931967 +0000 UTC m=+126.653171946" watchObservedRunningTime="2026-02-19 00:11:36.860295379 +0000 UTC m=+126.696535368" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.867595 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.867781 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.367746944 +0000 UTC m=+127.203986943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.868091 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.868569 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.368558227 +0000 UTC m=+127.204798226 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.893800 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:36 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:36 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:36 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.893865 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.933301 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" podStartSLOduration=104.933285045 podStartE2EDuration="1m44.933285045s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.898954754 +0000 UTC m=+126.735194763" watchObservedRunningTime="2026-02-19 00:11:36.933285045 +0000 UTC m=+126.769525034" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.969509 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.969675 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.469650164 +0000 UTC m=+127.305890163 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.969846 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:36 crc kubenswrapper[5109]: E0219 00:11:36.970124 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.470116128 +0000 UTC m=+127.306356117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.974284 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-kwkd6" podStartSLOduration=104.974260747 podStartE2EDuration="1m44.974260747s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:36.936833217 +0000 UTC m=+126.773073216" watchObservedRunningTime="2026-02-19 00:11:36.974260747 +0000 UTC m=+126.810500746" Feb 19 00:11:36 crc kubenswrapper[5109]: I0219 00:11:36.976226 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tt7nq"] Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.044554 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whng8" event={"ID":"decc90f6-d956-4221-b02d-e2e28b9f307a","Type":"ContainerStarted","Data":"a26f495f9f5fb5d5cdb090bfabfd4deb7db40dfb5b4f231010c6de8d747423c1"} Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.047237 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" event={"ID":"37b7e6dc-12f7-4753-a22a-36fdc2abe7b6","Type":"ContainerStarted","Data":"e4a6f1d435e650c7710c90b367eac0ca93e307bb7c11868e1f2340460afa0d26"} Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.050672 5109 patch_prober.go:28] interesting 
pod/marketplace-operator-547dbd544d-ddddh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.051099 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.060459 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-kk8zl" Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.070799 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.070972 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.570945198 +0000 UTC m=+127.407185187 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.071302 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6p97s" podStartSLOduration=105.071282647 podStartE2EDuration="1m45.071282647s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:37.069123995 +0000 UTC m=+126.905363974" watchObservedRunningTime="2026-02-19 00:11:37.071282647 +0000 UTC m=+126.907522636" Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.072086 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.073432 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-8fkxh" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.074918 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:37.574907592 +0000 UTC m=+127.411147581 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.173368 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.173872 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.673855437 +0000 UTC m=+127.510095416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.274843 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.275185 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.775166861 +0000 UTC m=+127.611406850 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.376419 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.376595 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.876567607 +0000 UTC m=+127.712807596 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.376838 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.377225 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.877209166 +0000 UTC m=+127.713449155 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.477828 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.478019 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.977993384 +0000 UTC m=+127.814233373 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.478442 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.478815 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:37.978801397 +0000 UTC m=+127.815041386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.580141 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.580289 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.080261785 +0000 UTC m=+127.916501774 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.580810 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.581127 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.08111452 +0000 UTC m=+127.917354509 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.682445 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.682618 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.182585448 +0000 UTC m=+128.018825447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.683083 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.683392 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.183378751 +0000 UTC m=+128.019618740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.783898 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.784050 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.284018685 +0000 UTC m=+128.120258674 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.784488 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.784813 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.284801058 +0000 UTC m=+128.121041047 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.886412 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.886570 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.386537364 +0000 UTC m=+128.222777363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.886806 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.887339 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.387320936 +0000 UTC m=+128.223560925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.896213 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:37 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:37 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:37 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.896399 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.988695 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.988863 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:38.488839376 +0000 UTC m=+128.325079365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:37 crc kubenswrapper[5109]: I0219 00:11:37.989095 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:37 crc kubenswrapper[5109]: E0219 00:11:37.989421 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.489408303 +0000 UTC m=+128.325648292 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.049455 5109 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-ggz6s container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.049511 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" podUID="3d7fffb6-c104-482f-8c6a-33b3dd961b62" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.051068 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whng8" event={"ID":"decc90f6-d956-4221-b02d-e2e28b9f307a","Type":"ContainerStarted","Data":"26418ab2bd22dca28b87c9b61bb881f6654399204353b8b5468511e34e8107de"} Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.051827 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" gracePeriod=30 Feb 19 00:11:38 crc 
kubenswrapper[5109]: I0219 00:11:38.089615 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.091104 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.591087267 +0000 UTC m=+128.427327246 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.190977 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.191380 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-19 00:11:38.69135986 +0000 UTC m=+128.527599849 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.204740 5109 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.291949 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.292098 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.792064616 +0000 UTC m=+128.628304605 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.292457 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.292774 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.792758656 +0000 UTC m=+128.628998645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.394333 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.394539 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.894509223 +0000 UTC m=+128.730749212 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.394789 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.395065 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.895052519 +0000 UTC m=+128.731292508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.486613 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-ggz6s" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.495409 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.495646 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.99559869 +0000 UTC m=+128.831838679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.496153 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.496585 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:38.996570678 +0000 UTC m=+128.832810667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.582325 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.591664 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.593414 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.594992 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.597782 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.597886 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.097861851 +0000 UTC m=+128.934101840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.597938 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.598384 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.098366476 +0000 UTC m=+128.934606455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.675554 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.679388 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.680781 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.682024 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.687145 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699170 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.699341 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.199316879 +0000 UTC m=+129.035556858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699398 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhxg2\" (UniqueName: \"kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699491 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699593 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699709 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699888 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.699918 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.700005 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.199987808 +0000 UTC m=+129.036227877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.801145 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.801217 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.301199269 +0000 UTC m=+129.137439258 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.801645 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.801877 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.801968 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vhxg2\" (UniqueName: \"kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.802032 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " 
pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.802084 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.802389 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.802504 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.802546 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.302534238 +0000 UTC m=+129.138774227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.802652 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.803037 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.820433 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhxg2\" (UniqueName: \"kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2\") pod \"community-operators-xsg6d\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.820882 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " 
pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.865438 5109 ???:1] "http: TLS handshake error from 192.168.126.11:37054: no serving certificate available for the kubelet" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.890299 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:38 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:38 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:38 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.890374 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.903915 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.904130 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.404098939 +0000 UTC m=+129.240338938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.904485 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:38 crc kubenswrapper[5109]: E0219 00:11:38.904989 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.404976054 +0000 UTC m=+129.241216043 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.905074 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.962724 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.974725 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:38 crc kubenswrapper[5109]: I0219 00:11:38.975803 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.009682 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.010396 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.010480 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.010659 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-dx7g2\" (UniqueName: \"kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: E0219 00:11:39.010829 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.510813888 +0000 UTC m=+129.347053877 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.032899 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.084030 5109 generic.go:358] "Generic (PLEG): container finished" podID="315ba213-ba49-4ab6-8b38-e3abe28ee907" containerID="29ea071f14c441255330c18b6a9c0f97e81e5052c8d74ca56c45345ac6a954fd" exitCode=0 Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.084184 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" event={"ID":"315ba213-ba49-4ab6-8b38-e3abe28ee907","Type":"ContainerDied","Data":"29ea071f14c441255330c18b6a9c0f97e81e5052c8d74ca56c45345ac6a954fd"} Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.093221 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whng8" event={"ID":"decc90f6-d956-4221-b02d-e2e28b9f307a","Type":"ContainerStarted","Data":"e25a67c31b77c67b092a52d1abf017e2127b5523ca09f89698fabac2e8f5748c"} Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.093263 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-whng8" event={"ID":"decc90f6-d956-4221-b02d-e2e28b9f307a","Type":"ContainerStarted","Data":"e0874938a31cb111edc51b652e00f63b87c33fd18f17c3eb45ad952490014782"} Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.111984 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:39 crc kubenswrapper[5109]: E0219 00:11:39.112464 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-19 00:11:39.612442751 +0000 UTC m=+129.448682740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmk4g" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.112947 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dx7g2\" (UniqueName: \"kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.113107 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.113209 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.113794 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.114716 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.128034 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-whng8" podStartSLOduration=11.12801226 podStartE2EDuration="11.12801226s" podCreationTimestamp="2026-02-19 00:11:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:39.125140637 +0000 UTC m=+128.961380626" watchObservedRunningTime="2026-02-19 00:11:39.12801226 +0000 UTC m=+128.964252259" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.134753 5109 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-19T00:11:38.204768957Z","UUID":"feac74e7-fbc9-43d6-a7bf-2259a12bf695","Handler":null,"Name":"","Endpoint":""} Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.139141 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx7g2\" (UniqueName: \"kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2\") pod \"community-operators-w6z29\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 
00:11:39.139496 5109 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.139531 5109 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.163533 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.184801 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.184964 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.189928 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.214138 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.214275 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 
00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.214376 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.214457 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95tfx\" (UniqueName: \"kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.233966 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.272605 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 19 00:11:39 crc kubenswrapper[5109]: W0219 00:11:39.277705 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podb0c65fd0_ff6b_4063_90a3_c538d2c30981.slice/crio-92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008 WatchSource:0}: Error finding container 92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008: Status 404 returned error can't find the container with id 92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008 Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.286996 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.315973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-95tfx\" (UniqueName: \"kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.316103 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.316194 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.317691 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.318983 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.319442 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.324270 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.324315 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.337449 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-95tfx\" (UniqueName: \"kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx\") pod \"certified-operators-8t8gx\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.357660 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.368658 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.377599 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.377756 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.386018 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmk4g\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.420052 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.420239 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffgf\" (UniqueName: \"kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.420325 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.504541 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:11:39 crc kubenswrapper[5109]: W0219 
00:11:39.513343 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce53be8_f7e0_44e3_b218_4f5f6985821d.slice/crio-8b0debd03dfbb30dadcb681ba5db3b12b74ec129d01a594fac01f1dd8e7ec9d0 WatchSource:0}: Error finding container 8b0debd03dfbb30dadcb681ba5db3b12b74ec129d01a594fac01f1dd8e7ec9d0: Status 404 returned error can't find the container with id 8b0debd03dfbb30dadcb681ba5db3b12b74ec129d01a594fac01f1dd8e7ec9d0 Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.513733 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.521231 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.521364 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffgf\" (UniqueName: \"kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.521405 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.521983 5109 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.522143 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.543368 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffgf\" (UniqueName: \"kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf\") pod \"certified-operators-lhxln\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.604778 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.605719 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.694209 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.764686 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.873780 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"] Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.890241 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:39 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:39 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:39 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:39 crc kubenswrapper[5109]: I0219 00:11:39.890280 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:39 crc kubenswrapper[5109]: W0219 00:11:39.923518 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf93c47a_3819_4073_82e5_8bb1c9e73432.slice/crio-43ada7017445eee7d68d2255c705ef7029c1bc37e74765d48ebf78e15a42d6ed WatchSource:0}: Error finding container 43ada7017445eee7d68d2255c705ef7029c1bc37e74765d48ebf78e15a42d6ed: Status 404 returned error can't find the container with id 43ada7017445eee7d68d2255c705ef7029c1bc37e74765d48ebf78e15a42d6ed Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.098112 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" 
event={"ID":"bf93c47a-3819-4073-82e5-8bb1c9e73432","Type":"ContainerStarted","Data":"bcab3ba9368fc474aaab0d1f5cab3431f543874abf597cf7f3d2c537a1bc4f2e"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.098185 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" event={"ID":"bf93c47a-3819-4073-82e5-8bb1c9e73432","Type":"ContainerStarted","Data":"43ada7017445eee7d68d2255c705ef7029c1bc37e74765d48ebf78e15a42d6ed"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.098308 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.099412 5109 generic.go:358] "Generic (PLEG): container finished" podID="43671b9e-b630-4d24-b0d0-67940647761e" containerID="06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48" exitCode=0 Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.099486 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerDied","Data":"06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.099548 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerStarted","Data":"4cbd020d08030ba595be2c79bef92d58f137de3069d4718693039d0e34f52fab"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.100946 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b0c65fd0-ff6b-4063-90a3-c538d2c30981","Type":"ContainerStarted","Data":"1335e230e350c093115f5d57becaf72612bef1f4f0e253e7bcd3bfc3c30b2564"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.100986 5109 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b0c65fd0-ff6b-4063-90a3-c538d2c30981","Type":"ContainerStarted","Data":"92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.103759 5109 generic.go:358] "Generic (PLEG): container finished" podID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerID="d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36" exitCode=0 Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.103796 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerDied","Data":"d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.103836 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerStarted","Data":"a6825257268dcbd77fbd555ba6379754b45cb3ba980f7b3a8a295b6220d38087"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.108988 5109 generic.go:358] "Generic (PLEG): container finished" podID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerID="c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7" exitCode=0 Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.109253 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerDied","Data":"c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.109311 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" 
event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerStarted","Data":"8b0debd03dfbb30dadcb681ba5db3b12b74ec129d01a594fac01f1dd8e7ec9d0"} Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.134207 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.136302 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" podStartSLOduration=108.136285577 podStartE2EDuration="1m48.136285577s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:40.131825048 +0000 UTC m=+129.968065047" watchObservedRunningTime="2026-02-19 00:11:40.136285577 +0000 UTC m=+129.972525566" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.212420 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.212405352 podStartE2EDuration="2.212405352s" podCreationTimestamp="2026-02-19 00:11:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:11:40.211955789 +0000 UTC m=+130.048195778" watchObservedRunningTime="2026-02-19 00:11:40.212405352 +0000 UTC m=+130.048645341" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.344967 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.433840 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume\") pod \"315ba213-ba49-4ab6-8b38-e3abe28ee907\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.433904 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") pod \"315ba213-ba49-4ab6-8b38-e3abe28ee907\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.433988 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume\") pod \"315ba213-ba49-4ab6-8b38-e3abe28ee907\" (UID: \"315ba213-ba49-4ab6-8b38-e3abe28ee907\") " Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.434734 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume" (OuterVolumeSpecName: "config-volume") pod "315ba213-ba49-4ab6-8b38-e3abe28ee907" (UID: "315ba213-ba49-4ab6-8b38-e3abe28ee907"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.440701 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt" (OuterVolumeSpecName: "kube-api-access-z4mqt") pod "315ba213-ba49-4ab6-8b38-e3abe28ee907" (UID: "315ba213-ba49-4ab6-8b38-e3abe28ee907"). 
InnerVolumeSpecName "kube-api-access-z4mqt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.441444 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "315ba213-ba49-4ab6-8b38-e3abe28ee907" (UID: "315ba213-ba49-4ab6-8b38-e3abe28ee907"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.535214 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4mqt\" (UniqueName: \"kubernetes.io/projected/315ba213-ba49-4ab6-8b38-e3abe28ee907-kube-api-access-z4mqt\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.535240 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/315ba213-ba49-4ab6-8b38-e3abe28ee907-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.535248 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/315ba213-ba49-4ab6-8b38-e3abe28ee907-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.709025 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.714466 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-tgx9p" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.888800 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:40 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:40 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:40 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.888867 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.965172 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.966928 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="315ba213-ba49-4ab6-8b38-e3abe28ee907" containerName="collect-profiles" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.966958 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="315ba213-ba49-4ab6-8b38-e3abe28ee907" containerName="collect-profiles" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.967119 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="315ba213-ba49-4ab6-8b38-e3abe28ee907" containerName="collect-profiles" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.975205 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.975399 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:40 crc kubenswrapper[5109]: I0219 00:11:40.986536 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.010905 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.044378 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.044431 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bsr\" (UniqueName: \"kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.044467 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.116209 5109 generic.go:358] "Generic (PLEG): container finished" podID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" 
containerID="2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631" exitCode=0 Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.116315 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerDied","Data":"2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631"} Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.116339 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerStarted","Data":"916d0af77839ed6028c16b09a32615bd4acdb05627b073cd0ee4fdea3ec49812"} Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.128884 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" event={"ID":"315ba213-ba49-4ab6-8b38-e3abe28ee907","Type":"ContainerDied","Data":"48d1c4c93e467f21957a7fa836ca45689d808e286b62925827b449b8a68cf1a5"} Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.128922 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48d1c4c93e467f21957a7fa836ca45689d808e286b62925827b449b8a68cf1a5" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.129021 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524320-r8sfn" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.145854 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.146108 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bsr\" (UniqueName: \"kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.146228 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.147596 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.148409 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " 
pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.154004 5109 generic.go:358] "Generic (PLEG): container finished" podID="b0c65fd0-ff6b-4063-90a3-c538d2c30981" containerID="1335e230e350c093115f5d57becaf72612bef1f4f0e253e7bcd3bfc3c30b2564" exitCode=0 Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.155259 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b0c65fd0-ff6b-4063-90a3-c538d2c30981","Type":"ContainerDied","Data":"1335e230e350c093115f5d57becaf72612bef1f4f0e253e7bcd3bfc3c30b2564"} Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.172566 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bsr\" (UniqueName: \"kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr\") pod \"redhat-marketplace-jz24j\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.293246 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.363711 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"] Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.380478 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"] Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.380611 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.555337 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.555671 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.555759 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trzg7\" (UniqueName: \"kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.567688 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.660260 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.660311 5109 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.660399 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-trzg7\" (UniqueName: \"kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.660838 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.660918 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.683268 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-trzg7\" (UniqueName: \"kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7\") pod \"redhat-marketplace-bmnjz\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.695312 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.893425 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"] Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.893682 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:41 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:41 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:41 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.893775 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:41 crc kubenswrapper[5109]: W0219 00:11:41.910221 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fb4b9d_4c0b_4367_a2e5_5c3031cccd2e.slice/crio-2a4cb88d1436cb0e61fc4ba51336f73acf6a1d7cea9b7ec9f57d4108aff8c960 WatchSource:0}: Error finding container 2a4cb88d1436cb0e61fc4ba51336f73acf6a1d7cea9b7ec9f57d4108aff8c960: Status 404 returned error can't find the container with id 2a4cb88d1436cb0e61fc4ba51336f73acf6a1d7cea9b7ec9f57d4108aff8c960 Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.969215 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.980366 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:11:41 crc kubenswrapper[5109]: 
I0219 00:11:41.980512 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:41 crc kubenswrapper[5109]: I0219 00:11:41.983433 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.068153 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.068255 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td46f\" (UniqueName: \"kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.068370 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.165063 5109 generic.go:358] "Generic (PLEG): container finished" podID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerID="e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618" exitCode=0 Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.165789 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerDied","Data":"e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618"} Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.166377 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerStarted","Data":"ff3419393eadae8278a8ad6cbf81a43e0a8b9900cb468aad9d42828e6759678b"} Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.169222 5109 generic.go:358] "Generic (PLEG): container finished" podID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerID="491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529" exitCode=0 Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.169985 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.170137 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerDied","Data":"491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529"} Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.170165 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerStarted","Data":"2a4cb88d1436cb0e61fc4ba51336f73acf6a1d7cea9b7ec9f57d4108aff8c960"} Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.170200 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-td46f\" (UniqueName: 
\"kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.170360 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.170855 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.171121 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.199392 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-td46f\" (UniqueName: \"kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f\") pod \"redhat-operators-jzxr2\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.272732 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.285715 5109 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.290706 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.290760 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.310521 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.312490 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.352611 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.383493 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"] Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.384069 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b0c65fd0-ff6b-4063-90a3-c538d2c30981" containerName="pruner" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.384086 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0c65fd0-ff6b-4063-90a3-c538d2c30981" containerName="pruner" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.384189 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="b0c65fd0-ff6b-4063-90a3-c538d2c30981" containerName="pruner" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.392193 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"] Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.392595 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.479393 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir\") pod \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.479984 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access\") pod \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\" (UID: \"b0c65fd0-ff6b-4063-90a3-c538d2c30981\") " Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.480465 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.480512 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.479394 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b0c65fd0-ff6b-4063-90a3-c538d2c30981" (UID: "b0c65fd0-ff6b-4063-90a3-c538d2c30981"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.480699 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.487404 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b0c65fd0-ff6b-4063-90a3-c538d2c30981" (UID: "b0c65fd0-ff6b-4063-90a3-c538d2c30981"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.531340 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:11:42 crc kubenswrapper[5109]: W0219 00:11:42.540207 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod733d45f4_d790_461d_b86e_51a69aeceeb7.slice/crio-366c95bd213d2ddb38b36bf2e2a71a54a5e6f479f6f075b7340381a0e6fe24ce WatchSource:0}: Error finding container 366c95bd213d2ddb38b36bf2e2a71a54a5e6f479f6f075b7340381a0e6fe24ce: Status 404 returned error can't find the container with id 366c95bd213d2ddb38b36bf2e2a71a54a5e6f479f6f075b7340381a0e6fe24ce Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.581482 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5fnp\" (UniqueName: \"kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.581541 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.581575 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.581688 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.581770 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.583798 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.584178 5109 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b0c65fd0-ff6b-4063-90a3-c538d2c30981-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.601312 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.613561 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.685039 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f5fnp\" (UniqueName: \"kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.685183 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.685333 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.685755 5109 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.685852 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.704512 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-v8z7c" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.705829 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.709560 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5fnp\" (UniqueName: \"kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp\") pod \"redhat-operators-j85xw\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") " pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.709787 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.891006 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.899003 5109 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-58zqj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 19 00:11:42 crc kubenswrapper[5109]: [-]has-synced failed: reason withheld Feb 19 00:11:42 crc kubenswrapper[5109]: [+]process-running ok Feb 19 00:11:42 crc kubenswrapper[5109]: healthz check failed Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.899072 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" podUID="d90a5916-ed50-483f-84e3-ec9e44da92f5" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 19 00:11:42 crc kubenswrapper[5109]: I0219 00:11:42.938195 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.004709 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.004757 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.004928 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"] Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.006600 5109 patch_prober.go:28] interesting pod/console-64d44f6ddf-4d9db container/console namespace/openshift-console: 
Startup probe status=failure output="Get \"https://10.217.0.41:8443/health\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body= Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.006675 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-4d9db" podUID="7169447a-e4aa-4492-99f3-0d21fe813f69" containerName="console" probeResult="failure" output="Get \"https://10.217.0.41:8443/health\": dial tcp 10.217.0.41:8443: connect: connection refused" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.020076 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-rgj5z container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.020420 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-rgj5z" podUID="753c6b93-7309-452f-b10c-8aa1c730a48a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 19 00:11:43 crc kubenswrapper[5109]: W0219 00:11:43.026951 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bba1daa_2b6b_477c_b556_9ddcdfa319c3.slice/crio-f8c87009a4d5cd1a344abc76a379c1bb86bd531aaf2398507641310e140a283e WatchSource:0}: Error finding container f8c87009a4d5cd1a344abc76a379c1bb86bd531aaf2398507641310e140a283e: Status 404 returned error can't find the container with id f8c87009a4d5cd1a344abc76a379c1bb86bd531aaf2398507641310e140a283e Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.178612 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" 
event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerStarted","Data":"f8c87009a4d5cd1a344abc76a379c1bb86bd531aaf2398507641310e140a283e"} Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.180369 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"69eb8481-9e77-4388-841d-852bdf327d9c","Type":"ContainerStarted","Data":"4bca808f4a501314bac6e86c623ff10ee21345f750f04fff4a6f04b5221fe1d7"} Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.182376 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.182238 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"b0c65fd0-ff6b-4063-90a3-c538d2c30981","Type":"ContainerDied","Data":"92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008"} Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.183419 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92c74efef22bbdfbe01e37ace58295c736285b3125e4a29b1d52fa967bc77008" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.189990 5109 generic.go:358] "Generic (PLEG): container finished" podID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerID="22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1" exitCode=0 Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.190032 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerDied","Data":"22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1"} Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.190113 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" 
event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerStarted","Data":"366c95bd213d2ddb38b36bf2e2a71a54a5e6f479f6f075b7340381a0e6fe24ce"} Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.890860 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:43 crc kubenswrapper[5109]: I0219 00:11:43.898167 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-58zqj" Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.031828 5109 ???:1] "http: TLS handshake error from 192.168.126.11:48676: no serving certificate available for the kubelet" Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.061618 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-trt7v" Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.205120 5109 generic.go:358] "Generic (PLEG): container finished" podID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerID="8b14255675ed93908dd5bf2e337ad3be249a32d035f6a3d9c3a6424a5df25a50" exitCode=0 Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.205234 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerDied","Data":"8b14255675ed93908dd5bf2e337ad3be249a32d035f6a3d9c3a6424a5df25a50"} Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.225788 5109 generic.go:358] "Generic (PLEG): container finished" podID="69eb8481-9e77-4388-841d-852bdf327d9c" containerID="08dd96a87d22e55e4f01fcb4ad5a191eac2d705ec1b862ae15a67afcf385113b" exitCode=0 Feb 19 00:11:44 crc kubenswrapper[5109]: I0219 00:11:44.226013 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" 
event={"ID":"69eb8481-9e77-4388-841d-852bdf327d9c","Type":"ContainerDied","Data":"08dd96a87d22e55e4f01fcb4ad5a191eac2d705ec1b862ae15a67afcf385113b"} Feb 19 00:11:46 crc kubenswrapper[5109]: I0219 00:11:46.035174 5109 patch_prober.go:28] interesting pod/downloads-747b44746d-rgj5z container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Feb 19 00:11:46 crc kubenswrapper[5109]: I0219 00:11:46.035823 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-rgj5z" podUID="753c6b93-7309-452f-b10c-8aa1c730a48a" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Feb 19 00:11:46 crc kubenswrapper[5109]: E0219 00:11:46.037464 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:46 crc kubenswrapper[5109]: E0219 00:11:46.039402 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:46 crc kubenswrapper[5109]: E0219 00:11:46.042248 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f 
/ready/ready"] Feb 19 00:11:46 crc kubenswrapper[5109]: E0219 00:11:46.042361 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 19 00:11:46 crc kubenswrapper[5109]: I0219 00:11:46.996747 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.052993 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.069747 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access\") pod \"69eb8481-9e77-4388-841d-852bdf327d9c\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.069837 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir\") pod \"69eb8481-9e77-4388-841d-852bdf327d9c\" (UID: \"69eb8481-9e77-4388-841d-852bdf327d9c\") " Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.070720 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "69eb8481-9e77-4388-841d-852bdf327d9c" (UID: "69eb8481-9e77-4388-841d-852bdf327d9c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.071006 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69eb8481-9e77-4388-841d-852bdf327d9c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.084547 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "69eb8481-9e77-4388-841d-852bdf327d9c" (UID: "69eb8481-9e77-4388-841d-852bdf327d9c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.173591 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69eb8481-9e77-4388-841d-852bdf327d9c-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.252561 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"69eb8481-9e77-4388-841d-852bdf327d9c","Type":"ContainerDied","Data":"4bca808f4a501314bac6e86c623ff10ee21345f750f04fff4a6f04b5221fe1d7"} Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.252605 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bca808f4a501314bac6e86c623ff10ee21345f750f04fff4a6f04b5221fe1d7" Feb 19 00:11:47 crc kubenswrapper[5109]: I0219 00:11:47.253086 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 19 00:11:51 crc kubenswrapper[5109]: I0219 00:11:51.950823 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:51 crc kubenswrapper[5109]: I0219 00:11:51.951135 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:51 crc kubenswrapper[5109]: I0219 00:11:51.958170 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:51 crc kubenswrapper[5109]: I0219 00:11:51.985863 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.052777 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.052845 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.052902 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.058104 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc-metrics-certs\") pod \"network-metrics-daemon-scmsj\" (UID: \"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc\") " pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.061710 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.063866 5109 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.158465 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.167296 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-scmsj" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.217248 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 19 00:11:52 crc kubenswrapper[5109]: I0219 00:11:52.231596 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:53 crc kubenswrapper[5109]: I0219 00:11:53.011379 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:53 crc kubenswrapper[5109]: I0219 00:11:53.016719 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-4d9db" Feb 19 00:11:54 crc kubenswrapper[5109]: I0219 00:11:54.308583 5109 ???:1] "http: TLS handshake error from 192.168.126.11:45592: no serving certificate available for the kubelet" Feb 19 00:11:54 crc kubenswrapper[5109]: I0219 00:11:54.574231 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:11:56 crc kubenswrapper[5109]: E0219 00:11:56.036293 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:56 crc kubenswrapper[5109]: E0219 00:11:56.038395 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:56 crc kubenswrapper[5109]: E0219 00:11:56.040399 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:11:56 
crc kubenswrapper[5109]: E0219 00:11:56.040463 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 19 00:11:56 crc kubenswrapper[5109]: I0219 00:11:56.051851 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-rgj5z" Feb 19 00:11:56 crc kubenswrapper[5109]: I0219 00:11:56.951546 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-scmsj"] Feb 19 00:11:56 crc kubenswrapper[5109]: W0219 00:11:56.968860 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-9ca4d926503616de437624b9cd02d070fd1aa0f8b64578a3b7425335630b7d58 WatchSource:0}: Error finding container 9ca4d926503616de437624b9cd02d070fd1aa0f8b64578a3b7425335630b7d58: Status 404 returned error can't find the container with id 9ca4d926503616de437624b9cd02d070fd1aa0f8b64578a3b7425335630b7d58 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.331369 5109 generic.go:358] "Generic (PLEG): container finished" podID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerID="0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.331468 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerDied","Data":"0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.333704 5109 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"1ab1a35244e571c252d607bce60d4ca7ea5cbf204e01f1e2441b7a49e8b1c62c"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.336543 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-scmsj" event={"ID":"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc","Type":"ContainerStarted","Data":"b28d5dcf8ec6ec9dcf0af43bba665895ffd0d7d8a2d3b65ee08818ef7834808a"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.338191 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"8c81c38277104cf5730be09547633772ef3a21af350a651eec26e40ffa00fe6b"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.340109 5109 generic.go:358] "Generic (PLEG): container finished" podID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerID="0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.340214 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerDied","Data":"0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.342220 5109 generic.go:358] "Generic (PLEG): container finished" podID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerID="82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.342339 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" 
event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerDied","Data":"82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.345561 5109 generic.go:358] "Generic (PLEG): container finished" podID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerID="1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.345691 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerDied","Data":"1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.350771 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerStarted","Data":"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.354128 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerStarted","Data":"8cc3e2dbfe0e00cb2ed0efbd9544e04a3cfdf498ee5cd412d10b897aa0669c5d"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.357552 5109 generic.go:358] "Generic (PLEG): container finished" podID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerID="4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.357705 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerDied","Data":"4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 
00:11:57.363073 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"9ca4d926503616de437624b9cd02d070fd1aa0f8b64578a3b7425335630b7d58"} Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.367917 5109 generic.go:358] "Generic (PLEG): container finished" podID="43671b9e-b630-4d24-b0d0-67940647761e" containerID="13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7" exitCode=0 Feb 19 00:11:57 crc kubenswrapper[5109]: I0219 00:11:57.368052 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerDied","Data":"13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.385078 5109 generic.go:358] "Generic (PLEG): container finished" podID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerID="8cc3e2dbfe0e00cb2ed0efbd9544e04a3cfdf498ee5cd412d10b897aa0669c5d" exitCode=0 Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.385144 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerDied","Data":"8cc3e2dbfe0e00cb2ed0efbd9544e04a3cfdf498ee5cd412d10b897aa0669c5d"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.388562 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"52efa87741605a77e3538cc49fe5591fbda14e1ecf7a151090862fe8689f3246"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.395514 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" 
event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerStarted","Data":"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.399432 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"734e9c92acc758fb85357423b76013d326a40fe5393b51ab309b93bd85a2f68d"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.402313 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-scmsj" event={"ID":"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc","Type":"ContainerStarted","Data":"ecfa30231dde336c1fe3aca4909f407454e976f97f8af0283681fa66c005538c"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.405675 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"8f33292fb68373cc6ae07f2775923af487ffce13b72604264591322f99956cef"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.411781 5109 generic.go:358] "Generic (PLEG): container finished" podID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerID="af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de" exitCode=0 Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.411848 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerDied","Data":"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de"} Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.615729 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8t8gx" podStartSLOduration=3.291895333 podStartE2EDuration="19.615711443s" 
podCreationTimestamp="2026-02-19 00:11:39 +0000 UTC" firstStartedPulling="2026-02-19 00:11:40.100223056 +0000 UTC m=+129.936463045" lastFinishedPulling="2026-02-19 00:11:56.424039166 +0000 UTC m=+146.260279155" observedRunningTime="2026-02-19 00:11:58.615359152 +0000 UTC m=+148.451599161" watchObservedRunningTime="2026-02-19 00:11:58.615711443 +0000 UTC m=+148.451951432" Feb 19 00:11:58 crc kubenswrapper[5109]: I0219 00:11:58.918066 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:11:59 crc kubenswrapper[5109]: I0219 00:11:59.070021 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:11:59 crc kubenswrapper[5109]: I0219 00:11:59.070255 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" containerID="cri-o://681436cc0af4d6ac2a715c58a7929773fcb13218e288b4536ee0a2468ba28be2" gracePeriod=30 Feb 19 00:11:59 crc kubenswrapper[5109]: I0219 00:11:59.085170 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:11:59 crc kubenswrapper[5109]: I0219 00:11:59.085451 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerName="route-controller-manager" containerID="cri-o://2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17" gracePeriod=30 Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.319777 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:12:00 crc 
kubenswrapper[5109]: I0219 00:12:00.320079 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.404844 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerStarted","Data":"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241"} Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.416420 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerStarted","Data":"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b"} Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.419382 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerStarted","Data":"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a"} Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.424149 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerStarted","Data":"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55"} Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.426106 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w6z29" podStartSLOduration=6.108066933 podStartE2EDuration="22.426094703s" podCreationTimestamp="2026-02-19 00:11:38 +0000 UTC" firstStartedPulling="2026-02-19 00:11:40.110489802 +0000 UTC m=+129.946729791" lastFinishedPulling="2026-02-19 00:11:56.428517572 +0000 UTC m=+146.264757561" observedRunningTime="2026-02-19 00:12:00.425039111 +0000 UTC 
m=+150.261279140" watchObservedRunningTime="2026-02-19 00:12:00.426094703 +0000 UTC m=+150.262334692" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.440971 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lhxln" podStartSLOduration=6.130757509 podStartE2EDuration="21.440954715s" podCreationTimestamp="2026-02-19 00:11:39 +0000 UTC" firstStartedPulling="2026-02-19 00:11:41.117761529 +0000 UTC m=+130.954001518" lastFinishedPulling="2026-02-19 00:11:56.427958735 +0000 UTC m=+146.264198724" observedRunningTime="2026-02-19 00:12:00.440100759 +0000 UTC m=+150.276340748" watchObservedRunningTime="2026-02-19 00:12:00.440954715 +0000 UTC m=+150.277194704" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.461836 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jz24j" podStartSLOduration=6.202259672 podStartE2EDuration="20.46181339s" podCreationTimestamp="2026-02-19 00:11:40 +0000 UTC" firstStartedPulling="2026-02-19 00:11:42.169367396 +0000 UTC m=+132.005607385" lastFinishedPulling="2026-02-19 00:11:56.428921104 +0000 UTC m=+146.265161103" observedRunningTime="2026-02-19 00:12:00.4585345 +0000 UTC m=+150.294774489" watchObservedRunningTime="2026-02-19 00:12:00.46181339 +0000 UTC m=+150.298053419" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.766266 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.784618 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xsg6d" podStartSLOduration=6.46006755 podStartE2EDuration="22.784599168s" podCreationTimestamp="2026-02-19 00:11:38 +0000 UTC" firstStartedPulling="2026-02-19 00:11:40.10450344 +0000 UTC m=+129.940743429" lastFinishedPulling="2026-02-19 00:11:56.429035018 +0000 UTC m=+146.265275047" observedRunningTime="2026-02-19 00:12:00.478065244 +0000 UTC m=+150.314305243" watchObservedRunningTime="2026-02-19 00:12:00.784599168 +0000 UTC m=+150.620839157" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791066 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"] Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791812 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="69eb8481-9e77-4388-841d-852bdf327d9c" containerName="pruner" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791830 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="69eb8481-9e77-4388-841d-852bdf327d9c" containerName="pruner" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791845 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerName="route-controller-manager" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791852 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerName="route-controller-manager" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.791983 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerName="route-controller-manager" Feb 19 00:12:00 crc 
kubenswrapper[5109]: I0219 00:12:00.791995 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="69eb8481-9e77-4388-841d-852bdf327d9c" containerName="pruner" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.926880 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config\") pod \"34503362-be2b-40ee-be2f-cdf7da7baa6f\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.926930 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca\") pod \"34503362-be2b-40ee-be2f-cdf7da7baa6f\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.926991 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert\") pod \"34503362-be2b-40ee-be2f-cdf7da7baa6f\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.927089 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v679z\" (UniqueName: \"kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z\") pod \"34503362-be2b-40ee-be2f-cdf7da7baa6f\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.927133 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp\") pod \"34503362-be2b-40ee-be2f-cdf7da7baa6f\" (UID: \"34503362-be2b-40ee-be2f-cdf7da7baa6f\") " Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.927751 5109 operation_generator.go:781] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp" (OuterVolumeSpecName: "tmp") pod "34503362-be2b-40ee-be2f-cdf7da7baa6f" (UID: "34503362-be2b-40ee-be2f-cdf7da7baa6f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.928016 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "34503362-be2b-40ee-be2f-cdf7da7baa6f" (UID: "34503362-be2b-40ee-be2f-cdf7da7baa6f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.928126 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config" (OuterVolumeSpecName: "config") pod "34503362-be2b-40ee-be2f-cdf7da7baa6f" (UID: "34503362-be2b-40ee-be2f-cdf7da7baa6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.929068 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"] Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.929199 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.944328 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "34503362-be2b-40ee-be2f-cdf7da7baa6f" (UID: "34503362-be2b-40ee-be2f-cdf7da7baa6f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:00 crc kubenswrapper[5109]: I0219 00:12:00.944475 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z" (OuterVolumeSpecName: "kube-api-access-v679z") pod "34503362-be2b-40ee-be2f-cdf7da7baa6f" (UID: "34503362-be2b-40ee-be2f-cdf7da7baa6f"). InnerVolumeSpecName "kube-api-access-v679z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.028546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jv7p\" (UniqueName: \"kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.028604 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.028655 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.028914 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.028998 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.029128 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.029150 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/34503362-be2b-40ee-be2f-cdf7da7baa6f-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.029165 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/34503362-be2b-40ee-be2f-cdf7da7baa6f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.029177 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v679z\" (UniqueName: \"kubernetes.io/projected/34503362-be2b-40ee-be2f-cdf7da7baa6f-kube-api-access-v679z\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.029188 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/34503362-be2b-40ee-be2f-cdf7da7baa6f-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.129855 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6jv7p\" (UniqueName: \"kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.129902 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.129926 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.129982 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.130007 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.130600 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.130913 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.131099 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.135566 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 
00:12:01.148353 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jv7p\" (UniqueName: \"kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p\") pod \"route-controller-manager-5fc9bb6544-cxvhz\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") " pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.163116 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.273841 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.294442 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.294507 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.435161 5109 generic.go:358] "Generic (PLEG): container finished" podID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerID="681436cc0af4d6ac2a715c58a7929773fcb13218e288b4536ee0a2468ba28be2" exitCode=0 Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.435245 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" event={"ID":"78decf6c-6b41-4e23-ae33-af1fc7cab261","Type":"ContainerDied","Data":"681436cc0af4d6ac2a715c58a7929773fcb13218e288b4536ee0a2468ba28be2"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.436618 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-scmsj" 
event={"ID":"4f8f9c49-a32e-44f5-8230-37bdcbf0a0bc","Type":"ContainerStarted","Data":"94bdf5c9c17d31fc6a7b8bdc4b4e9840b4491f45419eb249795a13eb70f7752b"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.448058 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerStarted","Data":"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.451314 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerStarted","Data":"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.453549 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerStarted","Data":"0b722142d431af6b73d301f9c5c545dc4b3f3b4ad7de72c58e77f63bc5de2753"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.455952 5109 generic.go:358] "Generic (PLEG): container finished" podID="34503362-be2b-40ee-be2f-cdf7da7baa6f" containerID="2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17" exitCode=0 Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.456535 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.459445 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" event={"ID":"34503362-be2b-40ee-be2f-cdf7da7baa6f","Type":"ContainerDied","Data":"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.459516 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh" event={"ID":"34503362-be2b-40ee-be2f-cdf7da7baa6f","Type":"ContainerDied","Data":"a5dad433e423334f1740ba0b8db0c842746df176fbd48179e8b79ab1ac8cc23e"} Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.459541 5109 scope.go:117] "RemoveContainer" containerID="2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.462597 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-scmsj" podStartSLOduration=129.462578042 podStartE2EDuration="2m9.462578042s" podCreationTimestamp="2026-02-19 00:09:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:01.462074526 +0000 UTC m=+151.298314515" watchObservedRunningTime="2026-02-19 00:12:01.462578042 +0000 UTC m=+151.298818031" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.501868 5109 scope.go:117] "RemoveContainer" containerID="2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17" Feb 19 00:12:01 crc kubenswrapper[5109]: E0219 00:12:01.503088 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17\": container with ID starting with 2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17 not found: ID does not exist" containerID="2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.503135 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17"} err="failed to get container status \"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17\": rpc error: code = NotFound desc = could not find container \"2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17\": container with ID starting with 2d82290e232ee6cea2592f38214c740720b9ae9ac1a4c937fddbc4f5bc7f7e17 not found: ID does not exist" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.521594 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jzxr2" podStartSLOduration=7.247140204 podStartE2EDuration="20.521580996s" podCreationTimestamp="2026-02-19 00:11:41 +0000 UTC" firstStartedPulling="2026-02-19 00:11:43.190956912 +0000 UTC m=+133.027196901" lastFinishedPulling="2026-02-19 00:11:56.465397694 +0000 UTC m=+146.301637693" observedRunningTime="2026-02-19 00:12:01.51937963 +0000 UTC m=+151.355619619" watchObservedRunningTime="2026-02-19 00:12:01.521580996 +0000 UTC m=+151.357820985" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.522564 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bmnjz" podStartSLOduration=6.235445398 podStartE2EDuration="20.522557956s" podCreationTimestamp="2026-02-19 00:11:41 +0000 UTC" firstStartedPulling="2026-02-19 00:11:42.170841059 +0000 UTC m=+132.007081048" lastFinishedPulling="2026-02-19 00:11:56.457953567 +0000 UTC m=+146.294193606" observedRunningTime="2026-02-19 
00:12:01.490986686 +0000 UTC m=+151.327226675" watchObservedRunningTime="2026-02-19 00:12:01.522557956 +0000 UTC m=+151.358797935" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.540296 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.551695 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-56tjh"] Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.551961 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"] Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.557332 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j85xw" podStartSLOduration=7.193201432 podStartE2EDuration="19.557319814s" podCreationTimestamp="2026-02-19 00:11:42 +0000 UTC" firstStartedPulling="2026-02-19 00:11:44.206030828 +0000 UTC m=+134.042270817" lastFinishedPulling="2026-02-19 00:11:56.57014921 +0000 UTC m=+146.406389199" observedRunningTime="2026-02-19 00:12:01.556842199 +0000 UTC m=+151.393082188" watchObservedRunningTime="2026-02-19 00:12:01.557319814 +0000 UTC m=+151.393559803" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.647317 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8t8gx" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="registry-server" probeResult="failure" output=< Feb 19 00:12:01 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s Feb 19 00:12:01 crc kubenswrapper[5109]: > Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.695687 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:12:01 crc 
kubenswrapper[5109]: I0219 00:12:01.696797 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.866410 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.895425 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"] Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.896131 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.896144 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" Feb 19 00:12:01 crc kubenswrapper[5109]: I0219 00:12:01.896241 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" containerName="controller-manager" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.046742 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047060 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047178 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047225 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhvs9\" (UniqueName: \"kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047250 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047280 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles\") pod \"78decf6c-6b41-4e23-ae33-af1fc7cab261\" (UID: \"78decf6c-6b41-4e23-ae33-af1fc7cab261\") " Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047347 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca" (OuterVolumeSpecName: "client-ca") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047509 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp" (OuterVolumeSpecName: "tmp") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047719 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/78decf6c-6b41-4e23-ae33-af1fc7cab261-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047736 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.047760 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.048090 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config" (OuterVolumeSpecName: "config") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.056874 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9" (OuterVolumeSpecName: "kube-api-access-qhvs9") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "kube-api-access-qhvs9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.057423 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78decf6c-6b41-4e23-ae33-af1fc7cab261" (UID: "78decf6c-6b41-4e23-ae33-af1fc7cab261"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.076354 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"] Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.076562 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.148458 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78decf6c-6b41-4e23-ae33-af1fc7cab261-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.148519 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.148530 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qhvs9\" (UniqueName: \"kubernetes.io/projected/78decf6c-6b41-4e23-ae33-af1fc7cab261-kube-api-access-qhvs9\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.148544 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78decf6c-6b41-4e23-ae33-af1fc7cab261-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249677 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249747 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6xb6\" (UniqueName: \"kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " 
pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249777 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249809 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249881 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.249900 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.313691 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:12:02 crc 
kubenswrapper[5109]: I0219 00:12:02.314849 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.336967 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-jz24j" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="registry-server" probeResult="failure" output=< Feb 19 00:12:02 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s Feb 19 00:12:02 crc kubenswrapper[5109]: > Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351075 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d6xb6\" (UniqueName: \"kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351320 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351544 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351753 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351779 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.351895 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.352174 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.353061 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.353073 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.353315 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.356568 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.370052 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6xb6\" (UniqueName: \"kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6\") pod \"controller-manager-fcd865f45-vmg6m\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.388981 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.462127 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" event={"ID":"2c09a8ac-74ba-40e1-8a28-f19e961ec0db","Type":"ContainerStarted","Data":"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89"} Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.462178 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" event={"ID":"2c09a8ac-74ba-40e1-8a28-f19e961ec0db","Type":"ContainerStarted","Data":"e7ef927b16ece20212d293c7fd5475ba5841c56e979a38c9420a3d0f95b16d35"} Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.462408 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.472271 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" event={"ID":"78decf6c-6b41-4e23-ae33-af1fc7cab261","Type":"ContainerDied","Data":"1f59c360eaa12d095d8a828a5d985de328535cb20baeed029758b61c2670000d"} Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.472355 5109 scope.go:117] "RemoveContainer" containerID="681436cc0af4d6ac2a715c58a7929773fcb13218e288b4536ee0a2468ba28be2" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.472285 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-mxvtz" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.499848 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" podStartSLOduration=3.499829273 podStartE2EDuration="3.499829273s" podCreationTimestamp="2026-02-19 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:02.497293506 +0000 UTC m=+152.333533495" watchObservedRunningTime="2026-02-19 00:12:02.499829273 +0000 UTC m=+152.336069272" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.520109 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.523892 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-mxvtz"] Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.654470 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"] Feb 19 00:12:02 crc kubenswrapper[5109]: W0219 00:12:02.664829 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0db50a50_d813_48a8_b407_38ef972ea7ae.slice/crio-82508f15de8e2e16e6d29452d9e01e42ddd7462aebdacbff4641b4c09b4626ec WatchSource:0}: Error finding container 82508f15de8e2e16e6d29452d9e01e42ddd7462aebdacbff4641b4c09b4626ec: Status 404 returned error can't find the container with id 82508f15de8e2e16e6d29452d9e01e42ddd7462aebdacbff4641b4c09b4626ec Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.687712 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.710382 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.710439 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.746802 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bmnjz" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="registry-server" probeResult="failure" output=< Feb 19 00:12:02 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s Feb 19 00:12:02 crc kubenswrapper[5109]: > Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.997740 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34503362-be2b-40ee-be2f-cdf7da7baa6f" path="/var/lib/kubelet/pods/34503362-be2b-40ee-be2f-cdf7da7baa6f/volumes" Feb 19 00:12:02 crc kubenswrapper[5109]: I0219 00:12:02.998493 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78decf6c-6b41-4e23-ae33-af1fc7cab261" path="/var/lib/kubelet/pods/78decf6c-6b41-4e23-ae33-af1fc7cab261/volumes" Feb 19 00:12:03 crc kubenswrapper[5109]: I0219 00:12:03.363690 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jzxr2" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="registry-server" probeResult="failure" output=< Feb 19 00:12:03 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s Feb 19 00:12:03 crc kubenswrapper[5109]: > Feb 19 00:12:03 crc kubenswrapper[5109]: I0219 00:12:03.493254 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" 
event={"ID":"0db50a50-d813-48a8-b407-38ef972ea7ae","Type":"ContainerStarted","Data":"3ec8664fc9d645ae52086164179c00a2e32c036659d5b497658686ce2b404b88"} Feb 19 00:12:03 crc kubenswrapper[5109]: I0219 00:12:03.493346 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" event={"ID":"0db50a50-d813-48a8-b407-38ef972ea7ae","Type":"ContainerStarted","Data":"82508f15de8e2e16e6d29452d9e01e42ddd7462aebdacbff4641b4c09b4626ec"} Feb 19 00:12:03 crc kubenswrapper[5109]: I0219 00:12:03.772690 5109 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j85xw" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="registry-server" probeResult="failure" output=< Feb 19 00:12:03 crc kubenswrapper[5109]: timeout: failed to connect service ":50051" within 1s Feb 19 00:12:03 crc kubenswrapper[5109]: > Feb 19 00:12:04 crc kubenswrapper[5109]: I0219 00:12:04.502011 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:04 crc kubenswrapper[5109]: I0219 00:12:04.508178 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:04 crc kubenswrapper[5109]: I0219 00:12:04.523407 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" podStartSLOduration=5.523389437 podStartE2EDuration="5.523389437s" podCreationTimestamp="2026-02-19 00:11:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:04.5211825 +0000 UTC m=+154.357422489" watchObservedRunningTime="2026-02-19 00:12:04.523389437 +0000 UTC m=+154.359629426" Feb 19 00:12:05 crc kubenswrapper[5109]: I0219 00:12:05.507707 5109 
generic.go:358] "Generic (PLEG): container finished" podID="46cb4d4a-e24c-4036-8369-78813ade70e6" containerID="0417d75210cbff694af95a4c921c670c929487a885abaad04f691988fabbfe10" exitCode=0 Feb 19 00:12:05 crc kubenswrapper[5109]: I0219 00:12:05.508032 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-lgkhz" event={"ID":"46cb4d4a-e24c-4036-8369-78813ade70e6","Type":"ContainerDied","Data":"0417d75210cbff694af95a4c921c670c929487a885abaad04f691988fabbfe10"} Feb 19 00:12:06 crc kubenswrapper[5109]: E0219 00:12:06.035723 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:12:06 crc kubenswrapper[5109]: E0219 00:12:06.037196 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:12:06 crc kubenswrapper[5109]: E0219 00:12:06.038536 5109 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 19 00:12:06 crc kubenswrapper[5109]: E0219 00:12:06.038568 5109 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" 
podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.722380 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.814337 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmbb7\" (UniqueName: \"kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7\") pod \"46cb4d4a-e24c-4036-8369-78813ade70e6\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.814469 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca\") pod \"46cb4d4a-e24c-4036-8369-78813ade70e6\" (UID: \"46cb4d4a-e24c-4036-8369-78813ade70e6\") " Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.814923 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca" (OuterVolumeSpecName: "serviceca") pod "46cb4d4a-e24c-4036-8369-78813ade70e6" (UID: "46cb4d4a-e24c-4036-8369-78813ade70e6"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.815139 5109 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/46cb4d4a-e24c-4036-8369-78813ade70e6-serviceca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.822131 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7" (OuterVolumeSpecName: "kube-api-access-lmbb7") pod "46cb4d4a-e24c-4036-8369-78813ade70e6" (UID: "46cb4d4a-e24c-4036-8369-78813ade70e6"). InnerVolumeSpecName "kube-api-access-lmbb7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:06 crc kubenswrapper[5109]: I0219 00:12:06.915801 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lmbb7\" (UniqueName: \"kubernetes.io/projected/46cb4d4a-e24c-4036-8369-78813ade70e6-kube-api-access-lmbb7\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:07 crc kubenswrapper[5109]: I0219 00:12:07.054271 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-rzdqn" Feb 19 00:12:07 crc kubenswrapper[5109]: I0219 00:12:07.522803 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29524320-lgkhz" event={"ID":"46cb4d4a-e24c-4036-8369-78813ade70e6","Type":"ContainerDied","Data":"80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0"} Feb 19 00:12:07 crc kubenswrapper[5109]: I0219 00:12:07.522854 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80502e28fe06e15d36671e55495cba46c7c8a2ff2200c2c22eadfe6690cc3ea0" Feb 19 00:12:07 crc kubenswrapper[5109]: I0219 00:12:07.522813 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29524320-lgkhz" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.231419 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-tt7nq_6a76c696-18d1-491c-9d23-36e91f949eed/kube-multus-additional-cni-plugins/0.log" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.231802 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.337069 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") pod \"6a76c696-18d1-491c-9d23-36e91f949eed\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.337214 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8cg2\" (UniqueName: \"kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2\") pod \"6a76c696-18d1-491c-9d23-36e91f949eed\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.337305 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir\") pod \"6a76c696-18d1-491c-9d23-36e91f949eed\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.337359 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready\") pod \"6a76c696-18d1-491c-9d23-36e91f949eed\" (UID: \"6a76c696-18d1-491c-9d23-36e91f949eed\") " Feb 19 00:12:08 crc 
kubenswrapper[5109]: I0219 00:12:08.337543 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "6a76c696-18d1-491c-9d23-36e91f949eed" (UID: "6a76c696-18d1-491c-9d23-36e91f949eed"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.337884 5109 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6a76c696-18d1-491c-9d23-36e91f949eed-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.338495 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready" (OuterVolumeSpecName: "ready") pod "6a76c696-18d1-491c-9d23-36e91f949eed" (UID: "6a76c696-18d1-491c-9d23-36e91f949eed"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.338753 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "6a76c696-18d1-491c-9d23-36e91f949eed" (UID: "6a76c696-18d1-491c-9d23-36e91f949eed"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.344262 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2" (OuterVolumeSpecName: "kube-api-access-p8cg2") pod "6a76c696-18d1-491c-9d23-36e91f949eed" (UID: "6a76c696-18d1-491c-9d23-36e91f949eed"). InnerVolumeSpecName "kube-api-access-p8cg2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.440017 5109 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/6a76c696-18d1-491c-9d23-36e91f949eed-ready\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.440075 5109 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6a76c696-18d1-491c-9d23-36e91f949eed-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.440096 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p8cg2\" (UniqueName: \"kubernetes.io/projected/6a76c696-18d1-491c-9d23-36e91f949eed-kube-api-access-p8cg2\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530581 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-tt7nq_6a76c696-18d1-491c-9d23-36e91f949eed/kube-multus-additional-cni-plugins/0.log" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530661 5109 generic.go:358] "Generic (PLEG): container finished" podID="6a76c696-18d1-491c-9d23-36e91f949eed" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" exitCode=137 Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530718 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" event={"ID":"6a76c696-18d1-491c-9d23-36e91f949eed","Type":"ContainerDied","Data":"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b"} Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530765 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" 
event={"ID":"6a76c696-18d1-491c-9d23-36e91f949eed","Type":"ContainerDied","Data":"ecb9334b93695da60442069932e925545359541391a3c220dc1f53b9bab7667c"} Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530795 5109 scope.go:117] "RemoveContainer" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.530811 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-tt7nq" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.551368 5109 scope.go:117] "RemoveContainer" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" Feb 19 00:12:08 crc kubenswrapper[5109]: E0219 00:12:08.551933 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b\": container with ID starting with dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b not found: ID does not exist" containerID="dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.552004 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b"} err="failed to get container status \"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b\": rpc error: code = NotFound desc = could not find container \"dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b\": container with ID starting with dd8900bd6bbd9b86bc69d14e2768dfed79fc2905cd22bd0c985046b7b94bcc9b not found: ID does not exist" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.565091 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tt7nq"] Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.569197 
5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-tt7nq"] Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.905251 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.905476 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:12:08 crc kubenswrapper[5109]: I0219 00:12:08.950541 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.001006 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" path="/var/lib/kubelet/pods/6a76c696-18d1-491c-9d23-36e91f949eed/volumes" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.287459 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.287508 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.337601 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.556730 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.591977 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.592070 5109 kubelet.go:2658] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.602553 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.695168 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.695227 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:09 crc kubenswrapper[5109]: I0219 00:12:09.745768 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:10 crc kubenswrapper[5109]: I0219 00:12:10.581693 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.350928 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.405084 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.561458 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.741383 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.765669 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:12:11 crc 
kubenswrapper[5109]: I0219 00:12:11.765989 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w6z29" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="registry-server" containerID="cri-o://cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241" gracePeriod=2 Feb 19 00:12:11 crc kubenswrapper[5109]: I0219 00:12:11.790722 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:12:11 crc kubenswrapper[5109]: E0219 00:12:11.865865 5109 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce53be8_f7e0_44e3_b218_4f5f6985821d.slice/crio-conmon-cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce53be8_f7e0_44e3_b218_4f5f6985821d.slice/crio-cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241.scope\": RecentStats: unable to find data in memory cache]" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.154798 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.191707 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities\") pod \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.191781 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx7g2\" (UniqueName: \"kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2\") pod \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.191835 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content\") pod \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\" (UID: \"9ce53be8-f7e0-44e3-b218-4f5f6985821d\") " Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.200503 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities" (OuterVolumeSpecName: "utilities") pod "9ce53be8-f7e0-44e3-b218-4f5f6985821d" (UID: "9ce53be8-f7e0-44e3-b218-4f5f6985821d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.203667 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2" (OuterVolumeSpecName: "kube-api-access-dx7g2") pod "9ce53be8-f7e0-44e3-b218-4f5f6985821d" (UID: "9ce53be8-f7e0-44e3-b218-4f5f6985821d"). InnerVolumeSpecName "kube-api-access-dx7g2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.244652 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ce53be8-f7e0-44e3-b218-4f5f6985821d" (UID: "9ce53be8-f7e0-44e3-b218-4f5f6985821d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.293563 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dx7g2\" (UniqueName: \"kubernetes.io/projected/9ce53be8-f7e0-44e3-b218-4f5f6985821d-kube-api-access-dx7g2\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.293607 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.293623 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ce53be8-f7e0-44e3-b218-4f5f6985821d-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.377839 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.417253 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.556807 5109 generic.go:358] "Generic (PLEG): container finished" podID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerID="cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241" exitCode=0 Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 
00:12:12.557660 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lhxln" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="registry-server" containerID="cri-o://381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b" gracePeriod=2 Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.556895 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerDied","Data":"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241"} Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.557758 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w6z29" event={"ID":"9ce53be8-f7e0-44e3-b218-4f5f6985821d","Type":"ContainerDied","Data":"8b0debd03dfbb30dadcb681ba5db3b12b74ec129d01a594fac01f1dd8e7ec9d0"} Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.557778 5109 scope.go:117] "RemoveContainer" containerID="cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.557007 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-w6z29" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.580667 5109 scope.go:117] "RemoveContainer" containerID="1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.604690 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.604847 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w6z29"] Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.607388 5109 scope.go:117] "RemoveContainer" containerID="c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.671497 5109 scope.go:117] "RemoveContainer" containerID="cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241" Feb 19 00:12:12 crc kubenswrapper[5109]: E0219 00:12:12.671838 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241\": container with ID starting with cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241 not found: ID does not exist" containerID="cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.671880 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241"} err="failed to get container status \"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241\": rpc error: code = NotFound desc = could not find container \"cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241\": container with ID starting with cd6a2303d6b3eb48ad62d06ab06ef90c3f1dda6a292a5886ec2c2207817b0241 not 
found: ID does not exist" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.671906 5109 scope.go:117] "RemoveContainer" containerID="1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d" Feb 19 00:12:12 crc kubenswrapper[5109]: E0219 00:12:12.672290 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d\": container with ID starting with 1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d not found: ID does not exist" containerID="1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.672334 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d"} err="failed to get container status \"1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d\": rpc error: code = NotFound desc = could not find container \"1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d\": container with ID starting with 1936aa4b9158ce140849df76daa719c98b9e9deedfdeaece5438fb8265da495d not found: ID does not exist" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.672361 5109 scope.go:117] "RemoveContainer" containerID="c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7" Feb 19 00:12:12 crc kubenswrapper[5109]: E0219 00:12:12.672725 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7\": container with ID starting with c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7 not found: ID does not exist" containerID="c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.672773 5109 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7"} err="failed to get container status \"c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7\": rpc error: code = NotFound desc = could not find container \"c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7\": container with ID starting with c59d73baa4e693b93bb88e50abd09ecba40d278cf17754495f9e49738c215cc7 not found: ID does not exist" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.745765 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.790202 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j85xw" Feb 19 00:12:12 crc kubenswrapper[5109]: I0219 00:12:12.988527 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.011767 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" path="/var/lib/kubelet/pods/9ce53be8-f7e0-44e3-b218-4f5f6985821d/volumes" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.108515 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content\") pod \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.108610 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities\") pod \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.108693 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ffgf\" (UniqueName: \"kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf\") pod \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\" (UID: \"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a\") " Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.109776 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities" (OuterVolumeSpecName: "utilities") pod "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" (UID: "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.124292 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf" (OuterVolumeSpecName: "kube-api-access-6ffgf") pod "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" (UID: "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a"). InnerVolumeSpecName "kube-api-access-6ffgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.164824 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" (UID: "be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.210751 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.211149 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.211160 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6ffgf\" (UniqueName: \"kubernetes.io/projected/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a-kube-api-access-6ffgf\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.579726 5109 generic.go:358] "Generic (PLEG): container finished" podID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" 
containerID="381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b" exitCode=0 Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.580222 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerDied","Data":"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b"} Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.580294 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lhxln" event={"ID":"be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a","Type":"ContainerDied","Data":"916d0af77839ed6028c16b09a32615bd4acdb05627b073cd0ee4fdea3ec49812"} Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.580327 5109 scope.go:117] "RemoveContainer" containerID="381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.580588 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lhxln" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.626772 5109 scope.go:117] "RemoveContainer" containerID="4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.630919 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.633669 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lhxln"] Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.656992 5109 scope.go:117] "RemoveContainer" containerID="2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.679441 5109 scope.go:117] "RemoveContainer" containerID="381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b" Feb 19 00:12:13 crc kubenswrapper[5109]: E0219 00:12:13.680108 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b\": container with ID starting with 381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b not found: ID does not exist" containerID="381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.680164 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b"} err="failed to get container status \"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b\": rpc error: code = NotFound desc = could not find container \"381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b\": container with ID starting with 381db38efee31b18e687904c057acc9e189863f0759a80953e2f060465ba0a3b not 
found: ID does not exist" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.680198 5109 scope.go:117] "RemoveContainer" containerID="4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec" Feb 19 00:12:13 crc kubenswrapper[5109]: E0219 00:12:13.680877 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec\": container with ID starting with 4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec not found: ID does not exist" containerID="4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.680922 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec"} err="failed to get container status \"4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec\": rpc error: code = NotFound desc = could not find container \"4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec\": container with ID starting with 4fb3002cf4ec8e3472816bb940240ddb28bb468e7dd3ff58a785418b6e28c4ec not found: ID does not exist" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.680965 5109 scope.go:117] "RemoveContainer" containerID="2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631" Feb 19 00:12:13 crc kubenswrapper[5109]: E0219 00:12:13.681429 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631\": container with ID starting with 2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631 not found: ID does not exist" containerID="2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.681467 5109 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631"} err="failed to get container status \"2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631\": rpc error: code = NotFound desc = could not find container \"2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631\": container with ID starting with 2b229d5d62df7be9a877c53f6e2ec085d12ec6fe6067c04c8b714924d8034631 not found: ID does not exist" Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.973004 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"] Feb 19 00:12:13 crc kubenswrapper[5109]: I0219 00:12:13.973565 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bmnjz" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="registry-server" containerID="cri-o://5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f" gracePeriod=2 Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.426423 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmnjz" Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.528943 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content\") pod \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.529125 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities\") pod \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.529172 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trzg7\" (UniqueName: \"kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7\") pod \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\" (UID: \"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e\") " Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.530621 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities" (OuterVolumeSpecName: "utilities") pod "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" (UID: "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.544794 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7" (OuterVolumeSpecName: "kube-api-access-trzg7") pod "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" (UID: "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e"). InnerVolumeSpecName "kube-api-access-trzg7". 
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.559816 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" (UID: "36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.589228 5109 generic.go:358] "Generic (PLEG): container finished" podID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerID="5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f" exitCode=0
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.589328 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerDied","Data":"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"}
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.589357 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmnjz" event={"ID":"36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e","Type":"ContainerDied","Data":"2a4cb88d1436cb0e61fc4ba51336f73acf6a1d7cea9b7ec9f57d4108aff8c960"}
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.589361 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmnjz"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.589400 5109 scope.go:117] "RemoveContainer" containerID="5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.615677 5109 scope.go:117] "RemoveContainer" containerID="82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.619057 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"]
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.622925 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmnjz"]
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.631145 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.631181 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-trzg7\" (UniqueName: \"kubernetes.io/projected/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-kube-api-access-trzg7\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.631193 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.639692 5109 scope.go:117] "RemoveContainer" containerID="491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.653856 5109 scope.go:117] "RemoveContainer" containerID="5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"
Feb 19 00:12:14 crc kubenswrapper[5109]: E0219 00:12:14.654468 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f\": container with ID starting with 5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f not found: ID does not exist" containerID="5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.654535 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f"} err="failed to get container status \"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f\": rpc error: code = NotFound desc = could not find container \"5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f\": container with ID starting with 5dcc760e08a280d78e5105b46a3feb3621f4fe701800296bd77becce9acca10f not found: ID does not exist"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.654576 5109 scope.go:117] "RemoveContainer" containerID="82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57"
Feb 19 00:12:14 crc kubenswrapper[5109]: E0219 00:12:14.655433 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57\": container with ID starting with 82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57 not found: ID does not exist" containerID="82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.655482 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57"} err="failed to get container status \"82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57\": rpc error: code = NotFound desc = could not find container \"82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57\": container with ID starting with 82ff92429a89d62ac39aa1743adc5008b5f6c8fbbacdb2d550ae5eff2c775b57 not found: ID does not exist"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.655590 5109 scope.go:117] "RemoveContainer" containerID="491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529"
Feb 19 00:12:14 crc kubenswrapper[5109]: E0219 00:12:14.656074 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529\": container with ID starting with 491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529 not found: ID does not exist" containerID="491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.656320 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529"} err="failed to get container status \"491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529\": rpc error: code = NotFound desc = could not find container \"491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529\": container with ID starting with 491c85933cc9c262155aebd16cee55f3adb58c834a727774e5f1770951f0b529 not found: ID does not exist"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.814426 5109 ???:1] "http: TLS handshake error from 192.168.126.11:42370: no serving certificate available for the kubelet"
Feb 19 00:12:14 crc kubenswrapper[5109]: I0219 00:12:14.998680 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" path="/var/lib/kubelet/pods/36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e/volumes"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:14.999619 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" path="/var/lib/kubelet/pods/be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a/volumes"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475367 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475895 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475908 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475920 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475926 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475940 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475948 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475956 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475967 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475976 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="46cb4d4a-e24c-4036-8369-78813ade70e6" containerName="image-pruner"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475981 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="46cb4d4a-e24c-4036-8369-78813ade70e6" containerName="image-pruner"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475991 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.475997 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="extract-content"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476004 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476009 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476017 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476022 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476058 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476065 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="extract-utilities"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476073 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476078 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476087 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476093 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476171 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="9ce53be8-f7e0-44e3-b218-4f5f6985821d" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476182 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="6a76c696-18d1-491c-9d23-36e91f949eed" containerName="kube-multus-additional-cni-plugins"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476190 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="36fb4b9d-4c0b-4367-a2e5-5c3031cccd2e" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476199 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="46cb4d4a-e24c-4036-8369-78813ade70e6" containerName="image-pruner"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.476206 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="be83f018-7ae4-47b4-a6b1-6bd8fb6a1c3a" containerName="registry-server"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.484959 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.487649 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\""
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.488658 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.497902 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.542184 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.542458 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.644228 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.644460 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.644493 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.666859 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") " pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:15 crc kubenswrapper[5109]: I0219 00:12:15.812868 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.217005 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.362881 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"]
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.363141 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j85xw" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="registry-server" containerID="cri-o://0b722142d431af6b73d301f9c5c545dc4b3f3b4ad7de72c58e77f63bc5de2753" gracePeriod=2
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.610765 5109 generic.go:358] "Generic (PLEG): container finished" podID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerID="0b722142d431af6b73d301f9c5c545dc4b3f3b4ad7de72c58e77f63bc5de2753" exitCode=0
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.610884 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerDied","Data":"0b722142d431af6b73d301f9c5c545dc4b3f3b4ad7de72c58e77f63bc5de2753"}
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.612923 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"06cd086a-2c76-4888-a77c-47797ecd1718","Type":"ContainerStarted","Data":"62c39afd896f2cc92567871daf90d9f7f720db3dc2f383d1e1a30e0dcff1f896"}
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.857656 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j85xw"
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.968248 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities\") pod \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") "
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.968410 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5fnp\" (UniqueName: \"kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp\") pod \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") "
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.968506 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content\") pod \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\" (UID: \"0bba1daa-2b6b-477c-b556-9ddcdfa319c3\") "
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.969216 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities" (OuterVolumeSpecName: "utilities") pod "0bba1daa-2b6b-477c-b556-9ddcdfa319c3" (UID: "0bba1daa-2b6b-477c-b556-9ddcdfa319c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:12:16 crc kubenswrapper[5109]: I0219 00:12:16.973816 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp" (OuterVolumeSpecName: "kube-api-access-f5fnp") pod "0bba1daa-2b6b-477c-b556-9ddcdfa319c3" (UID: "0bba1daa-2b6b-477c-b556-9ddcdfa319c3"). InnerVolumeSpecName "kube-api-access-f5fnp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.070299 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bba1daa-2b6b-477c-b556-9ddcdfa319c3" (UID: "0bba1daa-2b6b-477c-b556-9ddcdfa319c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.070352 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.070386 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f5fnp\" (UniqueName: \"kubernetes.io/projected/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-kube-api-access-f5fnp\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.171784 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bba1daa-2b6b-477c-b556-9ddcdfa319c3-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.619033 5109 generic.go:358] "Generic (PLEG): container finished" podID="06cd086a-2c76-4888-a77c-47797ecd1718" containerID="536570d19decad1a22732c7370fb27d800befb8d9929f4ca61c7bb7673b56aec" exitCode=0
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.619092 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"06cd086a-2c76-4888-a77c-47797ecd1718","Type":"ContainerDied","Data":"536570d19decad1a22732c7370fb27d800befb8d9929f4ca61c7bb7673b56aec"}
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.622051 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j85xw" event={"ID":"0bba1daa-2b6b-477c-b556-9ddcdfa319c3","Type":"ContainerDied","Data":"f8c87009a4d5cd1a344abc76a379c1bb86bd531aaf2398507641310e140a283e"}
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.622106 5109 scope.go:117] "RemoveContainer" containerID="0b722142d431af6b73d301f9c5c545dc4b3f3b4ad7de72c58e77f63bc5de2753"
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.622292 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j85xw"
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.644223 5109 scope.go:117] "RemoveContainer" containerID="8cc3e2dbfe0e00cb2ed0efbd9544e04a3cfdf498ee5cd412d10b897aa0669c5d"
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.660710 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"]
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.661556 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j85xw"]
Feb 19 00:12:17 crc kubenswrapper[5109]: I0219 00:12:17.681828 5109 scope.go:117] "RemoveContainer" containerID="8b14255675ed93908dd5bf2e337ad3be249a32d035f6a3d9c3a6424a5df25a50"
Feb 19 00:12:18 crc kubenswrapper[5109]: I0219 00:12:18.935311 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.001064 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access\") pod \"06cd086a-2c76-4888-a77c-47797ecd1718\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.001174 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir\") pod \"06cd086a-2c76-4888-a77c-47797ecd1718\" (UID: \"06cd086a-2c76-4888-a77c-47797ecd1718\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.001450 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "06cd086a-2c76-4888-a77c-47797ecd1718" (UID: "06cd086a-2c76-4888-a77c-47797ecd1718"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.001969 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" path="/var/lib/kubelet/pods/0bba1daa-2b6b-477c-b556-9ddcdfa319c3/volumes"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.004577 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"]
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.004863 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" podUID="0db50a50-d813-48a8-b407-38ef972ea7ae" containerName="controller-manager" containerID="cri-o://3ec8664fc9d645ae52086164179c00a2e32c036659d5b497658686ce2b404b88" gracePeriod=30
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.016795 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "06cd086a-2c76-4888-a77c-47797ecd1718" (UID: "06cd086a-2c76-4888-a77c-47797ecd1718"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.018957 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"]
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.019212 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" podUID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" containerName="route-controller-manager" containerID="cri-o://1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89" gracePeriod=30
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.102248 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/06cd086a-2c76-4888-a77c-47797ecd1718-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.102279 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06cd086a-2c76-4888-a77c-47797ecd1718-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.443528 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.474935 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"]
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.475732 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="extract-utilities"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.475802 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="extract-utilities"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.475863 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="06cd086a-2c76-4888-a77c-47797ecd1718" containerName="pruner"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.475913 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="06cd086a-2c76-4888-a77c-47797ecd1718" containerName="pruner"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.475973 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="extract-content"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476030 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="extract-content"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476084 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" containerName="route-controller-manager"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476140 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" containerName="route-controller-manager"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476204 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="registry-server"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476257 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="registry-server"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476402 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="06cd086a-2c76-4888-a77c-47797ecd1718" containerName="pruner"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476477 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" containerName="route-controller-manager"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.476530 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="0bba1daa-2b6b-477c-b556-9ddcdfa319c3" containerName="registry-server"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.480131 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.492877 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"]
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.505877 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert\") pod \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.505959 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config\") pod \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.505988 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jv7p\" (UniqueName: \"kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p\") pod \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.506014 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp\") pod \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.506032 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca\") pod \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\" (UID: \"2c09a8ac-74ba-40e1-8a28-f19e961ec0db\") "
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.506717 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp" (OuterVolumeSpecName: "tmp") pod "2c09a8ac-74ba-40e1-8a28-f19e961ec0db" (UID: "2c09a8ac-74ba-40e1-8a28-f19e961ec0db"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.506818 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca" (OuterVolumeSpecName: "client-ca") pod "2c09a8ac-74ba-40e1-8a28-f19e961ec0db" (UID: "2c09a8ac-74ba-40e1-8a28-f19e961ec0db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.507266 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config" (OuterVolumeSpecName: "config") pod "2c09a8ac-74ba-40e1-8a28-f19e961ec0db" (UID: "2c09a8ac-74ba-40e1-8a28-f19e961ec0db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.510978 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2c09a8ac-74ba-40e1-8a28-f19e961ec0db" (UID: "2c09a8ac-74ba-40e1-8a28-f19e961ec0db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.511013 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p" (OuterVolumeSpecName: "kube-api-access-6jv7p") pod "2c09a8ac-74ba-40e1-8a28-f19e961ec0db" (UID: "2c09a8ac-74ba-40e1-8a28-f19e961ec0db"). InnerVolumeSpecName "kube-api-access-6jv7p". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607285 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607541 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbh69\" (UniqueName: \"kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607586 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607613 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607709 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607825 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607840 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607852 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jv7p\" (UniqueName: \"kubernetes.io/projected/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-kube-api-access-6jv7p\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607863 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.607874 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2c09a8ac-74ba-40e1-8a28-f19e961ec0db-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.638943 5109 generic.go:358] "Generic (PLEG): container finished" podID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" containerID="1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89" exitCode=0
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.639047 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" event={"ID":"2c09a8ac-74ba-40e1-8a28-f19e961ec0db","Type":"ContainerDied","Data":"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89"}
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.639061 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.639088 5109 scope.go:117] "RemoveContainer" containerID="1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89"
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.639076 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz" event={"ID":"2c09a8ac-74ba-40e1-8a28-f19e961ec0db","Type":"ContainerDied","Data":"e7ef927b16ece20212d293c7fd5475ba5841c56e979a38c9420a3d0f95b16d35"}
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.641140 5109 generic.go:358] "Generic (PLEG): container finished" podID="0db50a50-d813-48a8-b407-38ef972ea7ae" containerID="3ec8664fc9d645ae52086164179c00a2e32c036659d5b497658686ce2b404b88" exitCode=0
Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.641201 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m"
event={"ID":"0db50a50-d813-48a8-b407-38ef972ea7ae","Type":"ContainerDied","Data":"3ec8664fc9d645ae52086164179c00a2e32c036659d5b497658686ce2b404b88"} Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.642886 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.642904 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"06cd086a-2c76-4888-a77c-47797ecd1718","Type":"ContainerDied","Data":"62c39afd896f2cc92567871daf90d9f7f720db3dc2f383d1e1a30e0dcff1f896"} Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.642924 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62c39afd896f2cc92567871daf90d9f7f720db3dc2f383d1e1a30e0dcff1f896" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.653651 5109 scope.go:117] "RemoveContainer" containerID="1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89" Feb 19 00:12:19 crc kubenswrapper[5109]: E0219 00:12:19.653982 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89\": container with ID starting with 1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89 not found: ID does not exist" containerID="1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.654010 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89"} err="failed to get container status \"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89\": rpc error: code = NotFound desc = could not find container \"1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89\": 
container with ID starting with 1e2ef32e6a958edf6887fb02e76b22d76065866d9a9736e73f4ceb035fa39b89 not found: ID does not exist" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.669225 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"] Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.671431 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5fc9bb6544-cxvhz"] Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.709510 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.709574 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbh69\" (UniqueName: \"kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.709622 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.710110 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.710182 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.710296 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.710978 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.711263 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 
00:12:19.715472 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.721922 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.728336 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbh69\" (UniqueName: \"kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69\") pod \"route-controller-manager-67c9d8ffb9-fsql5\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.743831 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"] Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.744358 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0db50a50-d813-48a8-b407-38ef972ea7ae" containerName="controller-manager" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.744376 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db50a50-d813-48a8-b407-38ef972ea7ae" containerName="controller-manager" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.744477 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="0db50a50-d813-48a8-b407-38ef972ea7ae" containerName="controller-manager" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.753900 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.760422 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"] Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.797023 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812001 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812043 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812106 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6xb6\" (UniqueName: \"kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812127 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812202 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.812277 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert\") pod \"0db50a50-d813-48a8-b407-38ef972ea7ae\" (UID: \"0db50a50-d813-48a8-b407-38ef972ea7ae\") " Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.814545 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.816625 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp" (OuterVolumeSpecName: "tmp") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.817340 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.817477 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config" (OuterVolumeSpecName: "config") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.824999 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.832813 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6" (OuterVolumeSpecName: "kube-api-access-d6xb6") pod "0db50a50-d813-48a8-b407-38ef972ea7ae" (UID: "0db50a50-d813-48a8-b407-38ef972ea7ae"). InnerVolumeSpecName "kube-api-access-d6xb6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.913540 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.913839 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.913872 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rck6\" (UniqueName: \"kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.913890 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914059 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914225 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914332 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d6xb6\" (UniqueName: \"kubernetes.io/projected/0db50a50-d813-48a8-b407-38ef972ea7ae-kube-api-access-d6xb6\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914344 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914355 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0db50a50-d813-48a8-b407-38ef972ea7ae-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914366 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0db50a50-d813-48a8-b407-38ef972ea7ae-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914374 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 
00:12:19 crc kubenswrapper[5109]: I0219 00:12:19.914385 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0db50a50-d813-48a8-b407-38ef972ea7ae-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.015870 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.015937 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.015996 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.016041 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5rck6\" (UniqueName: \"kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.016070 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.016109 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.017412 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.017404 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.017687 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc 
kubenswrapper[5109]: I0219 00:12:20.019723 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.021097 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.032022 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rck6\" (UniqueName: \"kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6\") pod \"controller-manager-86d58bf99b-kzvk9\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.071184 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.073058 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.086038 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.086345 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.088383 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.089159 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.191111 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"] Feb 19 00:12:20 crc kubenswrapper[5109]: W0219 00:12:20.196538 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e233db3_eed7_4f24_a7a1_7ea1b472bc8b.slice/crio-214db834cec46eda9f2523983d726e7a2a317a732b10660311b1458d79619b35 WatchSource:0}: Error finding container 214db834cec46eda9f2523983d726e7a2a317a732b10660311b1458d79619b35: Status 404 returned error can't find the container with id 214db834cec46eda9f2523983d726e7a2a317a732b10660311b1458d79619b35 Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.218362 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.218425 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " 
pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.218455 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.253411 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"] Feb 19 00:12:20 crc kubenswrapper[5109]: W0219 00:12:20.258870 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7983108_d607_4638_bc24_f630134aaecf.slice/crio-a5f5b7eb626bbafbfe64faff997aebb2091c11fc343aadbde6a57d565441ad2d WatchSource:0}: Error finding container a5f5b7eb626bbafbfe64faff997aebb2091c11fc343aadbde6a57d565441ad2d: Status 404 returned error can't find the container with id a5f5b7eb626bbafbfe64faff997aebb2091c11fc343aadbde6a57d565441ad2d Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.319528 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.319616 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.319660 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.319725 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.319756 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.338411 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access\") pod \"installer-12-crc\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.437110 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.650239 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" event={"ID":"0db50a50-d813-48a8-b407-38ef972ea7ae","Type":"ContainerDied","Data":"82508f15de8e2e16e6d29452d9e01e42ddd7462aebdacbff4641b4c09b4626ec"} Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.650287 5109 scope.go:117] "RemoveContainer" containerID="3ec8664fc9d645ae52086164179c00a2e32c036659d5b497658686ce2b404b88" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.650413 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-fcd865f45-vmg6m" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.653585 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" event={"ID":"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b","Type":"ContainerStarted","Data":"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07"} Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.653645 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" event={"ID":"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b","Type":"ContainerStarted","Data":"214db834cec46eda9f2523983d726e7a2a317a732b10660311b1458d79619b35"} Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.654567 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.657225 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" 
event={"ID":"d7983108-d607-4638-bc24-f630134aaecf","Type":"ContainerStarted","Data":"318174b2d7a1cd822985663f26fdaa7c88112f34f0e347154b39fb5cc36d7875"} Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.657287 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" event={"ID":"d7983108-d607-4638-bc24-f630134aaecf","Type":"ContainerStarted","Data":"a5f5b7eb626bbafbfe64faff997aebb2091c11fc343aadbde6a57d565441ad2d"} Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.657310 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.680804 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" podStartSLOduration=1.680788003 podStartE2EDuration="1.680788003s" podCreationTimestamp="2026-02-19 00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:20.671988606 +0000 UTC m=+170.508228605" watchObservedRunningTime="2026-02-19 00:12:20.680788003 +0000 UTC m=+170.517027992" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.682871 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"] Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.685227 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-fcd865f45-vmg6m"] Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.827286 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" podStartSLOduration=1.827265419 podStartE2EDuration="1.827265419s" podCreationTimestamp="2026-02-19 
00:12:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:20.709912779 +0000 UTC m=+170.546152768" watchObservedRunningTime="2026-02-19 00:12:20.827265419 +0000 UTC m=+170.663505418" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.839791 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.998126 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db50a50-d813-48a8-b407-38ef972ea7ae" path="/var/lib/kubelet/pods/0db50a50-d813-48a8-b407-38ef972ea7ae/volumes" Feb 19 00:12:20 crc kubenswrapper[5109]: I0219 00:12:20.998797 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c09a8ac-74ba-40e1-8a28-f19e961ec0db" path="/var/lib/kubelet/pods/2c09a8ac-74ba-40e1-8a28-f19e961ec0db/volumes" Feb 19 00:12:21 crc kubenswrapper[5109]: I0219 00:12:21.064828 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:21 crc kubenswrapper[5109]: I0219 00:12:21.114257 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:21 crc kubenswrapper[5109]: I0219 00:12:21.663866 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec","Type":"ContainerStarted","Data":"98325bac1491c7d1356ffd40914af7683ec717bb46ed96179732018a364b06d7"} Feb 19 00:12:21 crc kubenswrapper[5109]: I0219 00:12:21.664113 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" 
event={"ID":"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec","Type":"ContainerStarted","Data":"75266c21987dfc8c1b068517a095ff58612e9fcde84986a3be3f06a0ba4c6b2c"} Feb 19 00:12:21 crc kubenswrapper[5109]: I0219 00:12:21.680005 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=1.679954097 podStartE2EDuration="1.679954097s" podCreationTimestamp="2026-02-19 00:12:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:21.678769581 +0000 UTC m=+171.515009570" watchObservedRunningTime="2026-02-19 00:12:21.679954097 +0000 UTC m=+171.516194096" Feb 19 00:12:22 crc kubenswrapper[5109]: I0219 00:12:22.498736 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"] Feb 19 00:12:30 crc kubenswrapper[5109]: I0219 00:12:30.430015 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.021261 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.022429 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" podUID="d7983108-d607-4638-bc24-f630134aaecf" containerName="controller-manager" containerID="cri-o://318174b2d7a1cd822985663f26fdaa7c88112f34f0e347154b39fb5cc36d7875" gracePeriod=30 Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.043533 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.044077 5109 kuberuntime_container.go:858] "Killing 
container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" podUID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" containerName="route-controller-manager" containerID="cri-o://70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07" gracePeriod=30 Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.573919 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.603584 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.604134 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" containerName="route-controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.604152 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" containerName="route-controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.604229 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" containerName="route-controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.608111 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.620067 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.696913 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbh69\" (UniqueName: \"kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69\") pod \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.696989 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config\") pod \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697020 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca\") pod \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697069 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp\") pod \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\" (UID: \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697116 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert\") pod \"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\" (UID: 
\"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697288 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697357 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697376 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697413 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwqrb\" (UniqueName: \"kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697441 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.697651 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp" (OuterVolumeSpecName: "tmp") pod "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" (UID: "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.698046 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config" (OuterVolumeSpecName: "config") pod "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" (UID: "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.698268 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" (UID: "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.702471 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" (UID: "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.703145 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69" (OuterVolumeSpecName: "kube-api-access-rbh69") pod "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" (UID: "2e233db3-eed7-4f24-a7a1-7ea1b472bc8b"). InnerVolumeSpecName "kube-api-access-rbh69". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.776007 5109 generic.go:358] "Generic (PLEG): container finished" podID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" containerID="70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07" exitCode=0 Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.776164 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.776240 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" event={"ID":"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b","Type":"ContainerDied","Data":"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07"} Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.776340 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5" event={"ID":"2e233db3-eed7-4f24-a7a1-7ea1b472bc8b","Type":"ContainerDied","Data":"214db834cec46eda9f2523983d726e7a2a317a732b10660311b1458d79619b35"} Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.776363 5109 scope.go:117] "RemoveContainer" containerID="70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.778134 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="d7983108-d607-4638-bc24-f630134aaecf" containerID="318174b2d7a1cd822985663f26fdaa7c88112f34f0e347154b39fb5cc36d7875" exitCode=0 Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.778369 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" event={"ID":"d7983108-d607-4638-bc24-f630134aaecf","Type":"ContainerDied","Data":"318174b2d7a1cd822985663f26fdaa7c88112f34f0e347154b39fb5cc36d7875"} Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.790901 5109 scope.go:117] "RemoveContainer" containerID="70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07" Feb 19 00:12:39 crc kubenswrapper[5109]: E0219 00:12:39.794442 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07\": container with ID starting with 70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07 not found: ID does not exist" containerID="70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.794497 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07"} err="failed to get container status \"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07\": rpc error: code = NotFound desc = could not find container \"70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07\": container with ID starting with 70143c27ed1e5b295882dd74ca1a6f79be032a5c043c9f6ede27636ada57de07 not found: ID does not exist" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799300 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp\") pod \"route-controller-manager-76975c9bd5-zmk66\" 
(UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799337 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799387 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mwqrb\" (UniqueName: \"kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799422 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799444 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799490 5109 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-rbh69\" (UniqueName: \"kubernetes.io/projected/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-kube-api-access-rbh69\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799506 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799517 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799528 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.799538 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.801140 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.801462 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " 
pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.802370 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.805235 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.810602 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.813688 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-67c9d8ffb9-fsql5"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.813698 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.818900 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwqrb\" (UniqueName: \"kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb\") pod \"route-controller-manager-76975c9bd5-zmk66\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") " pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.837481 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5549fcb785-8z6q8"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.838175 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d7983108-d607-4638-bc24-f630134aaecf" containerName="controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.838201 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7983108-d607-4638-bc24-f630134aaecf" containerName="controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.838314 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d7983108-d607-4638-bc24-f630134aaecf" containerName="controller-manager" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.846018 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.848276 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5549fcb785-8z6q8"] Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900120 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900176 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900220 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rck6\" (UniqueName: \"kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900704 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp" (OuterVolumeSpecName: "tmp") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900763 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900809 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.900906 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert\") pod \"d7983108-d607-4638-bc24-f630134aaecf\" (UID: \"d7983108-d607-4638-bc24-f630134aaecf\") " Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901275 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca" (OuterVolumeSpecName: "client-ca") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901326 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901372 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config" (OuterVolumeSpecName: "config") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901891 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d7983108-d607-4638-bc24-f630134aaecf-tmp\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901913 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901923 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-client-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.901933 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d7983108-d607-4638-bc24-f630134aaecf-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.903451 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6" (OuterVolumeSpecName: "kube-api-access-5rck6") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "kube-api-access-5rck6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.904561 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7983108-d607-4638-bc24-f630134aaecf" (UID: "d7983108-d607-4638-bc24-f630134aaecf"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:39 crc kubenswrapper[5109]: I0219 00:12:39.920075 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.002969 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003411 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003480 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003554 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swssv\" (UniqueName: \"kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003604 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003751 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003870 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7983108-d607-4638-bc24-f630134aaecf-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.003890 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5rck6\" (UniqueName: \"kubernetes.io/projected/d7983108-d607-4638-bc24-f630134aaecf-kube-api-access-5rck6\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105577 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105623 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-swssv\" (UniqueName: \"kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105763 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105844 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105958 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.105995 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.106359 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.107832 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.107888 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.107977 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.113773 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.123092 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-swssv\" (UniqueName: \"kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv\") pod \"controller-manager-5549fcb785-8z6q8\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.162321 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.324674 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"]
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.609026 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5549fcb785-8z6q8"]
Feb 19 00:12:40 crc kubenswrapper[5109]: W0219 00:12:40.613153 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2081eb95_0888_4007_a776_c0d49ad86851.slice/crio-ac9951ae2e8edf0941d50f496da0f7f49b8da2bcfe78b1f1bbd6a5c692831b64 WatchSource:0}: Error finding container ac9951ae2e8edf0941d50f496da0f7f49b8da2bcfe78b1f1bbd6a5c692831b64: Status 404 returned error can't find the container with id ac9951ae2e8edf0941d50f496da0f7f49b8da2bcfe78b1f1bbd6a5c692831b64
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.786227 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.786234 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-86d58bf99b-kzvk9" event={"ID":"d7983108-d607-4638-bc24-f630134aaecf","Type":"ContainerDied","Data":"a5f5b7eb626bbafbfe64faff997aebb2091c11fc343aadbde6a57d565441ad2d"}
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.786401 5109 scope.go:117] "RemoveContainer" containerID="318174b2d7a1cd822985663f26fdaa7c88112f34f0e347154b39fb5cc36d7875"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.787764 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" event={"ID":"43bba815-cfda-4121-a857-94c60d92f1fb","Type":"ContainerStarted","Data":"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd"}
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.787814 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" event={"ID":"43bba815-cfda-4121-a857-94c60d92f1fb","Type":"ContainerStarted","Data":"40087f6ca4eeb708554521e7a23c4ea8f5b9f22c5e31629341f8981f7372587b"}
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.789189 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.791473 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" event={"ID":"2081eb95-0888-4007-a776-c0d49ad86851","Type":"ContainerStarted","Data":"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1"}
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.791507 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" event={"ID":"2081eb95-0888-4007-a776-c0d49ad86851","Type":"ContainerStarted","Data":"ac9951ae2e8edf0941d50f496da0f7f49b8da2bcfe78b1f1bbd6a5c692831b64"}
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.791731 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.793413 5109 patch_prober.go:28] interesting pod/controller-manager-5549fcb785-8z6q8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.793578 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" podUID="2081eb95-0888-4007-a776-c0d49ad86851" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.810515 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" podStartSLOduration=1.810483522 podStartE2EDuration="1.810483522s" podCreationTimestamp="2026-02-19 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:40.808502162 +0000 UTC m=+190.644742151" watchObservedRunningTime="2026-02-19 00:12:40.810483522 +0000 UTC m=+190.646723531"
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.823241 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"]
Feb 19 00:12:40 crc kubenswrapper[5109]: I0219 00:12:40.826231 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-86d58bf99b-kzvk9"]
Feb 19 00:12:41 crc kubenswrapper[5109]: I0219 00:12:41.013956 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e233db3-eed7-4f24-a7a1-7ea1b472bc8b" path="/var/lib/kubelet/pods/2e233db3-eed7-4f24-a7a1-7ea1b472bc8b/volumes"
Feb 19 00:12:41 crc kubenswrapper[5109]: I0219 00:12:41.014724 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7983108-d607-4638-bc24-f630134aaecf" path="/var/lib/kubelet/pods/d7983108-d607-4638-bc24-f630134aaecf/volumes"
Feb 19 00:12:41 crc kubenswrapper[5109]: I0219 00:12:41.236191 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"
Feb 19 00:12:41 crc kubenswrapper[5109]: I0219 00:12:41.254063 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" podStartSLOduration=2.254045044 podStartE2EDuration="2.254045044s" podCreationTimestamp="2026-02-19 00:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:40.849133948 +0000 UTC m=+190.685373927" watchObservedRunningTime="2026-02-19 00:12:41.254045044 +0000 UTC m=+191.090285033"
Feb 19 00:12:41 crc kubenswrapper[5109]: I0219 00:12:41.807224 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8"
Feb 19 00:12:47 crc kubenswrapper[5109]: I0219 00:12:47.552955 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift" containerID="cri-o://86ef05141ce80e0771e179df6537d063346d9fc4316f14154f658d6d5fe5223a" gracePeriod=15
Feb 19 00:12:47 crc kubenswrapper[5109]: I0219 00:12:47.837539 5109 generic.go:358] "Generic (PLEG): container finished" podID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerID="86ef05141ce80e0771e179df6537d063346d9fc4316f14154f658d6d5fe5223a" exitCode=0
Feb 19 00:12:47 crc kubenswrapper[5109]: I0219 00:12:47.837617 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" event={"ID":"ffac205b-047e-4cf8-bcc5-39a818ee5655","Type":"ContainerDied","Data":"86ef05141ce80e0771e179df6537d063346d9fc4316f14154f658d6d5fe5223a"}
Feb 19 00:12:47 crc kubenswrapper[5109]: I0219 00:12:47.984252 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.016955 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017001 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017022 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017070 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017229 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017289 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017314 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017353 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6ks\" (UniqueName: \"kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017399 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.017934 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.018195 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.018360 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.018896 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7d44778b44-jmql6"]
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.019786 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.019817 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.019999 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" containerName="oauth-openshift"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.028878 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.029824 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks" (OuterVolumeSpecName: "kube-api-access-rb6ks") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "kube-api-access-rb6ks". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.030441 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.032463 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.033526 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.036342 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.041131 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d44778b44-jmql6"]
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.041312 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.118384 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.118441 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.118886 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.120235 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.120273 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.118954 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.120476 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error\") pod \"ffac205b-047e-4cf8-bcc5-39a818ee5655\" (UID: \"ffac205b-047e-4cf8-bcc5-39a818ee5655\") "
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.120888 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.120956 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-dir\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121007 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5txq5\" (UniqueName: \"kubernetes.io/projected/ae63e40b-878d-49c1-b89c-67506b6a494f-kube-api-access-5txq5\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121055 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121086 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121155 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-login\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121218 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-policies\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121239 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121281 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121322 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121417 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121451 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121493 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-error\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121574 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-session\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6"
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121762 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121780 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121870 5109 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121891 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121911 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121925 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121938 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121951 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121969 5109 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ffac205b-047e-4cf8-bcc5-39a818ee5655-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121983 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rb6ks\" (UniqueName: \"kubernetes.io/projected/ffac205b-047e-4cf8-bcc5-39a818ee5655-kube-api-access-rb6ks\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.121996 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.124276 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.124600 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655").
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.125144 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "ffac205b-047e-4cf8-bcc5-39a818ee5655" (UID: "ffac205b-047e-4cf8-bcc5-39a818ee5655"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223007 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-session\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223065 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223091 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-dir\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223116 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5txq5\" (UniqueName: \"kubernetes.io/projected/ae63e40b-878d-49c1-b89c-67506b6a494f-kube-api-access-5txq5\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223174 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-dir\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223208 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223226 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223257 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-login\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: 
\"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223285 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-policies\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223302 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223323 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.223344 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224009 5109 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224111 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224211 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-error\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224341 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224375 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224398 5109 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/ffac205b-047e-4cf8-bcc5-39a818ee5655-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.224553 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.225334 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-audit-policies\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.225594 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.226266 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.227651 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.228808 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-session\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.229131 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-login\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.229729 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.230294 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " 
pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.230980 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-user-template-error\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.232301 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.232334 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ae63e40b-878d-49c1-b89c-67506b6a494f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.242903 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5txq5\" (UniqueName: \"kubernetes.io/projected/ae63e40b-878d-49c1-b89c-67506b6a494f-kube-api-access-5txq5\") pod \"oauth-openshift-7d44778b44-jmql6\" (UID: \"ae63e40b-878d-49c1-b89c-67506b6a494f\") " pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.372693 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.819401 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7d44778b44-jmql6"] Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.845535 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" event={"ID":"ae63e40b-878d-49c1-b89c-67506b6a494f","Type":"ContainerStarted","Data":"bf87f619132ef702154f5bc0576c1caff778304eb148e7c9940d989970e82026"} Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.850993 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" event={"ID":"ffac205b-047e-4cf8-bcc5-39a818ee5655","Type":"ContainerDied","Data":"1eaf03126aca033072718f7ea3256d48c27efdc7dd974e0a75daddb5da63a012"} Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.851037 5109 scope.go:117] "RemoveContainer" containerID="86ef05141ce80e0771e179df6537d063346d9fc4316f14154f658d6d5fe5223a" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.851114 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-nsncq" Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.895844 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"] Feb 19 00:12:48 crc kubenswrapper[5109]: I0219 00:12:48.900602 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-nsncq"] Feb 19 00:12:49 crc kubenswrapper[5109]: I0219 00:12:49.004169 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffac205b-047e-4cf8-bcc5-39a818ee5655" path="/var/lib/kubelet/pods/ffac205b-047e-4cf8-bcc5-39a818ee5655/volumes" Feb 19 00:12:49 crc kubenswrapper[5109]: I0219 00:12:49.858330 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" event={"ID":"ae63e40b-878d-49c1-b89c-67506b6a494f","Type":"ContainerStarted","Data":"e3d3acb4f64a85a8acc489beab7674d3ecaa9bfa444a570f036429d92b62cf06"} Feb 19 00:12:49 crc kubenswrapper[5109]: I0219 00:12:49.858559 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:49 crc kubenswrapper[5109]: I0219 00:12:49.865807 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" Feb 19 00:12:49 crc kubenswrapper[5109]: I0219 00:12:49.881484 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7d44778b44-jmql6" podStartSLOduration=27.881465966 podStartE2EDuration="27.881465966s" podCreationTimestamp="2026-02-19 00:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:12:49.879849619 +0000 UTC m=+199.716089628" watchObservedRunningTime="2026-02-19 
00:12:49.881465966 +0000 UTC m=+199.717705955" Feb 19 00:12:55 crc kubenswrapper[5109]: I0219 00:12:55.805290 5109 ???:1] "http: TLS handshake error from 192.168.126.11:43624: no serving certificate available for the kubelet" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.900071 5109 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.908469 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.924405 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.944734 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.986747 5109 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987190 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b" gracePeriod=15 Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987274 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 
00:12:58.987356 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987368 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae" gracePeriod=15 Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987415 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987438 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d" gracePeriod=15 Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987479 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff" gracePeriod=15 Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987540 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.987423 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1" gracePeriod=15 Feb 19 00:12:58 crc kubenswrapper[5109]: I0219 00:12:58.989866 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.006560 5109 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.007956 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.007992 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008023 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 19 
00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008039 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008061 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008077 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008096 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008111 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008135 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008151 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008215 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008234 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008251 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing 
container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008267 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008291 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008309 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008340 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008425 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008452 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008468 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008743 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008772 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008795 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008815 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008843 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008897 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.008928 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.009266 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.009285 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.091829 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.091890 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.091926 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.091973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092034 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092064 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092142 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092192 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092247 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092254 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092306 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092600 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092864 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092872 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.092953 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194401 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194723 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194778 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194825 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194865 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194984 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.194581 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.195090 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.195870 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.195940 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.245890 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 19 00:12:59 crc kubenswrapper[5109]: W0219 00:12:59.264309 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-0780c9e91ad98d673e5989b4ec16f149739be78689f5eaf998bbaf339a90574c WatchSource:0}: Error finding container 0780c9e91ad98d673e5989b4ec16f149739be78689f5eaf998bbaf339a90574c: Status 404 returned error can't find the container with id 0780c9e91ad98d673e5989b4ec16f149739be78689f5eaf998bbaf339a90574c
Feb 19 00:12:59 crc kubenswrapper[5109]: E0219 00:12:59.267182 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d772dfeed1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:12:59.266419995 +0000 UTC m=+209.102659994,LastTimestamp:2026-02-19 00:12:59.266419995 +0000 UTC m=+209.102659994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.935428 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.937546 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.938768 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae" exitCode=0
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.938818 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1" exitCode=0
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.938839 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d" exitCode=0
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.938854 5109 scope.go:117] "RemoveContainer" containerID="902dad25ca201baa112466ebe06b651bf942a434327c27f14679c7cfa3407c99"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.938856 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff" exitCode=2
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.942701 5109 generic.go:358] "Generic (PLEG): container finished" podID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" containerID="98325bac1491c7d1356ffd40914af7683ec717bb46ed96179732018a364b06d7" exitCode=0
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.942847 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec","Type":"ContainerDied","Data":"98325bac1491c7d1356ffd40914af7683ec717bb46ed96179732018a364b06d7"}
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.944300 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.945087 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.945495 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246"}
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.945671 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"0780c9e91ad98d673e5989b4ec16f149739be78689f5eaf998bbaf339a90574c"}
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.946670 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:12:59 crc kubenswrapper[5109]: I0219 00:12:59.947426 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:00 crc kubenswrapper[5109]: I0219 00:13:00.960331 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Feb 19 00:13:00 crc kubenswrapper[5109]: I0219 00:13:00.998306 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:00 crc kubenswrapper[5109]: I0219 00:13:00.998876 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.364916 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.366172 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.366827 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.367164 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.367611 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.410620 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.411323 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.411686 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.412176 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530086 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir\") pod \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530274 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" (UID: "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530391 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access\") pod \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530488 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530530 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530599 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530781 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530856 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.530888 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock\") pod \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\" (UID: \"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec\") "
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531110 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock" (OuterVolumeSpecName: "var-lock") pod "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" (UID: "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531283 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531316 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531349 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531577 5109 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531601 5109 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531618 5109 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-var-lock\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531672 5109 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531709 5109 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.531763 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.534112 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.539311 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" (UID: "6cdb76ac-846f-4c53-aca6-b0af36fbc9ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.633104 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6cdb76ac-846f-4c53-aca6-b0af36fbc9ec-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.633137 5109 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.633149 5109 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.974159 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.975875 5109 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b" exitCode=0
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.976034 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.976088 5109 scope.go:117] "RemoveContainer" containerID="84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.978689 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"6cdb76ac-846f-4c53-aca6-b0af36fbc9ec","Type":"ContainerDied","Data":"75266c21987dfc8c1b068517a095ff58612e9fcde84986a3be3f06a0ba4c6b2c"}
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.978747 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75266c21987dfc8c1b068517a095ff58612e9fcde84986a3be3f06a0ba4c6b2c"
Feb 19 00:13:01 crc kubenswrapper[5109]: I0219 00:13:01.978800 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.006527 5109 scope.go:117] "RemoveContainer" containerID="642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.016738 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.017210 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.017711 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.018240 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.018805 5109 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.019272 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.038442 5109 scope.go:117] "RemoveContainer" containerID="e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.062455 5109 scope.go:117] "RemoveContainer" containerID="27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.088689 5109 scope.go:117] "RemoveContainer" containerID="400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.111263 5109 scope.go:117] "RemoveContainer" containerID="ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.182267 5109 scope.go:117] "RemoveContainer" containerID="84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae"
Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.182723 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae\": container with ID starting with 84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae not found: ID does not exist" containerID="84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.182753 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae"} err="failed to get container status \"84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae\": rpc error: code = NotFound desc = could not find container \"84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae\": container with ID starting with 84816fefc9881cada119f65b2e560e6892698489a82882651bef0e7548aec0ae not found: ID does not exist"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.182775 5109 scope.go:117] "RemoveContainer" containerID="642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1"
Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.183170 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\": container with ID starting with 642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1 not found: ID does not exist" containerID="642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.183345 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1"} err="failed to get container status \"642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\": rpc error: code = NotFound desc = could not find container \"642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1\": container with ID starting with 642c96975ca33aab6da47cbc137db1ccd39d63c313e6f61606ac342d2cde35c1 not found: ID does not exist"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.183520 5109 scope.go:117] "RemoveContainer" containerID="e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d"
Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.184076 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\": container with ID starting with e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d not found: ID does not exist" containerID="e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.184143 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d"} err="failed to get container status \"e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\": rpc error: code = NotFound desc = could not find container \"e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d\": container with ID starting with e99064b437d9f1a4f18360c24a445b8c8321f5950ec6dea3285f0948e174a41d not found: ID does not exist"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.184180 5109 scope.go:117] "RemoveContainer" containerID="27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff"
Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.184671 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\": container with ID starting with 27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff not found: ID does not exist" containerID="27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff"
Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.184696 5109 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff"} err="failed to get container status \"27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\": rpc error: code = NotFound desc = could not find container \"27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff\": container with ID starting with 27089a0147d7ef820732adaea3574b6f86454860ea21ec3646235bfa14658aff not found: ID does not exist" Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.184713 5109 scope.go:117] "RemoveContainer" containerID="400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.185060 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\": container with ID starting with 400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b not found: ID does not exist" containerID="400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b" Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.185111 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b"} err="failed to get container status \"400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\": rpc error: code = NotFound desc = could not find container \"400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b\": container with ID starting with 400d1372d453484388fae2a7c682606d70215cca26d6ec221000a9b153d0178b not found: ID does not exist" Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.185148 5109 scope.go:117] "RemoveContainer" containerID="ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.185483 5109 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\": container with ID starting with ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445 not found: ID does not exist" containerID="ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445" Feb 19 00:13:02 crc kubenswrapper[5109]: I0219 00:13:02.185693 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445"} err="failed to get container status \"ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\": rpc error: code = NotFound desc = could not find container \"ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445\": container with ID starting with ad20a05792013c3977a68ca37e931f846793a8a58a822b9cb8e4b3a360dea445 not found: ID does not exist" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.683088 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:02Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:02Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:02Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.683850 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.684415 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.685017 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 
00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.685310 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:02 crc kubenswrapper[5109]: E0219 00:13:02.685328 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:13:03 crc kubenswrapper[5109]: I0219 00:13:03.003439 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.729542 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.730983 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.731429 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.732121 5109 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.732697 5109 controller.go:195] "Failed to update lease" 
err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:05 crc kubenswrapper[5109]: I0219 00:13:05.732737 5109 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.733081 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="200ms" Feb 19 00:13:05 crc kubenswrapper[5109]: E0219 00:13:05.935123 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="400ms" Feb 19 00:13:06 crc kubenswrapper[5109]: E0219 00:13:06.336338 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="800ms" Feb 19 00:13:07 crc kubenswrapper[5109]: E0219 00:13:07.138053 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="1.6s" Feb 19 00:13:08 crc kubenswrapper[5109]: E0219 00:13:08.739811 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: 
connection refused" interval="3.2s" Feb 19 00:13:08 crc kubenswrapper[5109]: E0219 00:13:08.776776 5109 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.196:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18957d772dfeed1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-19 00:12:59.266419995 +0000 UTC m=+209.102659994,LastTimestamp:2026-02-19 00:12:59.266419995 +0000 UTC m=+209.102659994,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 19 00:13:11 crc kubenswrapper[5109]: I0219 00:13:11.001158 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:11 crc kubenswrapper[5109]: I0219 00:13:11.005004 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial 
tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:11 crc kubenswrapper[5109]: E0219 00:13:11.057340 5109 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.196:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" volumeName="registry-storage" Feb 19 00:13:11 crc kubenswrapper[5109]: E0219 00:13:11.941735 5109 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" interval="6.4s" Feb 19 00:13:11 crc kubenswrapper[5109]: I0219 00:13:11.991286 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:11 crc kubenswrapper[5109]: I0219 00:13:11.992442 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:11 crc kubenswrapper[5109]: I0219 00:13:11.993032 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:12 crc kubenswrapper[5109]: I0219 00:13:12.016360 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:12 crc kubenswrapper[5109]: I0219 00:13:12.016389 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.016866 5109 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:12 crc kubenswrapper[5109]: I0219 00:13:12.017336 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:12 crc kubenswrapper[5109]: W0219 00:13:12.046933 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-112f9672e850c416eb6f2cd179eba130ff3c09af264d8bf3d253a6c3c34d6d0e WatchSource:0}: Error finding container 112f9672e850c416eb6f2cd179eba130ff3c09af264d8bf3d253a6c3c34d6d0e: Status 404 returned error can't find the container with id 112f9672e850c416eb6f2cd179eba130ff3c09af264d8bf3d253a6c3c34d6d0e Feb 19 00:13:12 crc kubenswrapper[5109]: I0219 00:13:12.058559 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"112f9672e850c416eb6f2cd179eba130ff3c09af264d8bf3d253a6c3c34d6d0e"} Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.767346 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:12Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:12Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:12Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-19T00:13:12Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.771209 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.771937 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.772424 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 
00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.772901 5109 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:12 crc kubenswrapper[5109]: E0219 00:13:12.772938 5109 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.067130 5109 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="9ec163553c81b13c08300e634b8247e0214ba9fc2bd8285d7294156edf1602e6" exitCode=0 Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.067204 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"9ec163553c81b13c08300e634b8247e0214ba9fc2bd8285d7294156edf1602e6"} Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.067888 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.067935 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.068395 5109 status_manager.go:895] "Failed to get status for pod" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:13 crc kubenswrapper[5109]: E0219 00:13:13.068701 5109 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:13 crc kubenswrapper[5109]: I0219 00:13:13.068886 5109 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.129.56.196:6443: connect: connection refused" Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.080771 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.081047 5109 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04" exitCode=1 Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.081155 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04"} Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.081712 5109 scope.go:117] "RemoveContainer" containerID="5b9fc5c4aaf97fb47e82f7bdc892fbd99a46d205841861db8603dae74e1d0d04" Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.092783 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"bf15150f46c4c3a5e547ae068077bb49bdcb9f3db8f91647b505918ef54ee415"} Feb 19 00:13:14 crc 
kubenswrapper[5109]: I0219 00:13:14.092819 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"dec0f5f981d16f22fdc8728de13b030917dbee061210244a46dbb9921ec6348f"} Feb 19 00:13:14 crc kubenswrapper[5109]: I0219 00:13:14.092832 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"722e05649c3990013800ecd7f1dd12f6335a80e7158425ed2584124dd979ad53"} Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.101118 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.101450 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"aec2596e9124e8381fd750318109e2ad00a66dc8cd2adb85afa2ceeb14abdcb7"} Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.105124 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"4883659c5cb0f2a75327d16a068ad713013ffbff3c16dd138302ee2e8abb2870"} Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.105155 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"d6bbcae5179532daf06b7c465e6d60b20dbb471d60d27fb81d838bb105247268"} Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.105386 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.105417 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928" Feb 19 00:13:15 crc kubenswrapper[5109]: I0219 00:13:15.105463 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:17 crc kubenswrapper[5109]: I0219 00:13:17.018064 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:17 crc kubenswrapper[5109]: I0219 00:13:17.018317 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:17 crc kubenswrapper[5109]: I0219 00:13:17.026147 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 19 00:13:17 crc kubenswrapper[5109]: I0219 00:13:17.463007 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:13:17 crc kubenswrapper[5109]: I0219 00:13:17.471531 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:13:18 crc kubenswrapper[5109]: I0219 00:13:18.124429 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 19 00:13:18 crc kubenswrapper[5109]: I0219 00:13:18.289407 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 
00:13:18 crc kubenswrapper[5109]: I0219 00:13:18.289491 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:13:20 crc kubenswrapper[5109]: I0219 00:13:20.712956 5109 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:20 crc kubenswrapper[5109]: I0219 00:13:20.713366 5109 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.009325 5109 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="c9c9afb9-f83b-4172-a2a9-10f5f2315465"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.152742 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.152795 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.158562 5109 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="c9c9afb9-f83b-4172-a2a9-10f5f2315465"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.161698 5109 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://722e05649c3990013800ecd7f1dd12f6335a80e7158425ed2584124dd979ad53"
Feb 19 00:13:21 crc kubenswrapper[5109]: I0219 00:13:21.161923 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:22 crc kubenswrapper[5109]: I0219 00:13:22.159469 5109 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928"
Feb 19 00:13:22 crc kubenswrapper[5109]: I0219 00:13:22.159507 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f6b74d2e-e32f-4317-a051-fc2f98ac2928"
Feb 19 00:13:22 crc kubenswrapper[5109]: I0219 00:13:22.164331 5109 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="c9c9afb9-f83b-4172-a2a9-10f5f2315465"
Feb 19 00:13:29 crc kubenswrapper[5109]: I0219 00:13:29.137904 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 19 00:13:30 crc kubenswrapper[5109]: I0219 00:13:30.565195 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 19 00:13:30 crc kubenswrapper[5109]: I0219 00:13:30.657541 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 19 00:13:30 crc kubenswrapper[5109]: I0219 00:13:30.899357 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 19 00:13:31 crc kubenswrapper[5109]: I0219 00:13:31.032920 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\""
Feb 19 00:13:31 crc kubenswrapper[5109]: I0219 00:13:31.538856 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 19 00:13:31 crc kubenswrapper[5109]: I0219 00:13:31.757971 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.069873 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.138360 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.180817 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.890556 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.913220 5109 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.939156 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:32 crc kubenswrapper[5109]: I0219 00:13:32.987184 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.042516 5109 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.234332 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.308628 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.363116 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.380594 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.453881 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.460278 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.730153 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.793304 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\""
Feb 19 00:13:33 crc kubenswrapper[5109]: I0219 00:13:33.888853 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.167681 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.174262 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.192110 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.232422 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.288083 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.313286 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.343520 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.522531 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.590402 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.723175 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.801608 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.843369 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.854520 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.855050 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.921238 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.945276 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.980391 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 19 00:13:34 crc kubenswrapper[5109]: I0219 00:13:34.990734 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.180261 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.285303 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.349079 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.361449 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.402030 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.418800 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.552612 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.556052 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.612541 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.616057 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.706965 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.722991 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.852183 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.889771 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.947149 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.983879 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.987864 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 19 00:13:35 crc kubenswrapper[5109]: I0219 00:13:35.999501 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.078838 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.094934 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.273019 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.301001 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.431291 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.445183 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.487507 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.504373 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.519497 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.655124 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.793431 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.885619 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.926910 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:36 crc kubenswrapper[5109]: I0219 00:13:36.942795 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.074286 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.137446 5109 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.144992 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.170446 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.210969 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.256440 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.261824 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.305038 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.361900 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.378592 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.462863 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.484552 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.535002 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.542159 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.608012 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.638036 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.664759 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.701220 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.733174 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.768011 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.848717 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.893715 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.905976 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:37 crc kubenswrapper[5109]: I0219 00:13:37.934040 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.063959 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.089464 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.121244 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.159367 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.180148 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.189661 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.207558 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.245776 5109 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.272141 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.274829 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.338731 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.410478 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.413010 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.466035 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.530293 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.571204 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.574076 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.659939 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.674221 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.699797 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.800207 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.832000 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 19 00:13:38 crc kubenswrapper[5109]: I0219 00:13:38.922603 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.021981 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.086994 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.134817 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.136717 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.194093 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.233376 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.284301 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.300049 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.358834 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.414185 5109 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.416984 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=41.416970989 podStartE2EDuration="41.416970989s" podCreationTimestamp="2026-02-19 00:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:20.803099541 +0000 UTC m=+230.639339540" watchObservedRunningTime="2026-02-19 00:13:39.416970989 +0000 UTC m=+249.253210978"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.418229 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.418271 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.418285 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66","openshift-controller-manager/controller-manager-5549fcb785-8z6q8"]
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.418463 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" podUID="2081eb95-0888-4007-a776-c0d49ad86851" containerName="controller-manager" containerID="cri-o://87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1" gracePeriod=30
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.419083 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" podUID="43bba815-cfda-4121-a857-94c60d92f1fb" containerName="route-controller-manager" containerID="cri-o://22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd" gracePeriod=30
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.424617 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.450726 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.450838 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.490190 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=19.490158291 podStartE2EDuration="19.490158291s" podCreationTimestamp="2026-02-19 00:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:39.451809748 +0000 UTC m=+249.288049817" watchObservedRunningTime="2026-02-19 00:13:39.490158291 +0000 UTC m=+249.326398320"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.536279 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.570661 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.578111 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.677763 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.710162 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.788243 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.842419 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.870508 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.875046 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.924599 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.963302 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt"]
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964024 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" containerName="installer"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964049 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" containerName="installer"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964061 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43bba815-cfda-4121-a857-94c60d92f1fb" containerName="route-controller-manager"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964070 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="43bba815-cfda-4121-a857-94c60d92f1fb" containerName="route-controller-manager"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964185 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="43bba815-cfda-4121-a857-94c60d92f1fb" containerName="route-controller-manager"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.964203 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="6cdb76ac-846f-4c53-aca6-b0af36fbc9ec" containerName="installer"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.966691 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config\") pod \"43bba815-cfda-4121-a857-94c60d92f1fb\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") "
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.966892 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwqrb\" (UniqueName: \"kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb\") pod \"43bba815-cfda-4121-a857-94c60d92f1fb\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") "
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.967485 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config" (OuterVolumeSpecName: "config") pod "43bba815-cfda-4121-a857-94c60d92f1fb" (UID: "43bba815-cfda-4121-a857-94c60d92f1fb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.967841 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp\") pod \"43bba815-cfda-4121-a857-94c60d92f1fb\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") "
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.967875 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca\") pod \"43bba815-cfda-4121-a857-94c60d92f1fb\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") "
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.967925 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert\") pod \"43bba815-cfda-4121-a857-94c60d92f1fb\" (UID: \"43bba815-cfda-4121-a857-94c60d92f1fb\") "
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.968185 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.968527 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp" (OuterVolumeSpecName: "tmp") pod "43bba815-cfda-4121-a857-94c60d92f1fb" (UID: "43bba815-cfda-4121-a857-94c60d92f1fb"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.968537 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca" (OuterVolumeSpecName: "client-ca") pod "43bba815-cfda-4121-a857-94c60d92f1fb" (UID: "43bba815-cfda-4121-a857-94c60d92f1fb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.972586 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt"
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.972985 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb" (OuterVolumeSpecName: "kube-api-access-mwqrb") pod "43bba815-cfda-4121-a857-94c60d92f1fb" (UID: "43bba815-cfda-4121-a857-94c60d92f1fb"). InnerVolumeSpecName "kube-api-access-mwqrb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.973109 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "43bba815-cfda-4121-a857-94c60d92f1fb" (UID: "43bba815-cfda-4121-a857-94c60d92f1fb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.977356 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt"]
Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.986779 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" Feb 19 00:13:39 crc kubenswrapper[5109]: I0219 00:13:39.990231 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.046342 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-769d6595b7-qnppb"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.047050 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2081eb95-0888-4007-a776-c0d49ad86851" containerName="controller-manager" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.047072 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2081eb95-0888-4007-a776-c0d49ad86851" containerName="controller-manager" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.047199 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2081eb95-0888-4007-a776-c0d49ad86851" containerName="controller-manager" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.053127 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-769d6595b7-qnppb"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.053290 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.059351 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.068908 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config\") pod \"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.068967 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca\") pod \"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069000 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles\") pod \"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069089 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swssv\" (UniqueName: \"kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv\") pod \"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069122 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert\") pod 
\"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069219 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp\") pod \"2081eb95-0888-4007-a776-c0d49ad86851\" (UID: \"2081eb95-0888-4007-a776-c0d49ad86851\") " Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069344 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmxt\" (UniqueName: \"kubernetes.io/projected/0712da37-cf8f-4f3c-8488-2399422136f1-kube-api-access-2rmxt\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069404 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0712da37-cf8f-4f3c-8488-2399422136f1-tmp\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069444 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-config\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069481 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-client-ca\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069548 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0712da37-cf8f-4f3c-8488-2399422136f1-serving-cert\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069673 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mwqrb\" (UniqueName: \"kubernetes.io/projected/43bba815-cfda-4121-a857-94c60d92f1fb-kube-api-access-mwqrb\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069689 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/43bba815-cfda-4121-a857-94c60d92f1fb-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069702 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/43bba815-cfda-4121-a857-94c60d92f1fb-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069713 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/43bba815-cfda-4121-a857-94c60d92f1fb-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069783 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp" (OuterVolumeSpecName: "tmp") pod 
"2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069868 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca" (OuterVolumeSpecName: "client-ca") pod "2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.069925 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.070059 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config" (OuterVolumeSpecName: "config") pod "2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.071962 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv" (OuterVolumeSpecName: "kube-api-access-swssv") pod "2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "kube-api-access-swssv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.072316 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2081eb95-0888-4007-a776-c0d49ad86851" (UID: "2081eb95-0888-4007-a776-c0d49ad86851"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.105745 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.117408 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.118059 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.130307 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.150321 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.156871 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170204 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-config\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: 
\"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170241 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-client-ca\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170259 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0712da37-cf8f-4f3c-8488-2399422136f1-serving-cert\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170317 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-config\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170333 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7027c698-ea93-4995-aeed-ae9eda3d9897-serving-cert\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170418 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-2rmxt\" (UniqueName: \"kubernetes.io/projected/0712da37-cf8f-4f3c-8488-2399422136f1-kube-api-access-2rmxt\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.170965 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-client-ca\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171044 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0712da37-cf8f-4f3c-8488-2399422136f1-tmp\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171132 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7027c698-ea93-4995-aeed-ae9eda3d9897-tmp\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171186 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vf9q\" (UniqueName: \"kubernetes.io/projected/7027c698-ea93-4995-aeed-ae9eda3d9897-kube-api-access-8vf9q\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " 
pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171243 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-proxy-ca-bundles\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171430 5109 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171497 5109 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-client-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171511 5109 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2081eb95-0888-4007-a776-c0d49ad86851-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171528 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-swssv\" (UniqueName: \"kubernetes.io/projected/2081eb95-0888-4007-a776-c0d49ad86851-kube-api-access-swssv\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171541 5109 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2081eb95-0888-4007-a776-c0d49ad86851-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171553 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/2081eb95-0888-4007-a776-c0d49ad86851-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171622 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0712da37-cf8f-4f3c-8488-2399422136f1-tmp\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.171735 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-client-ca\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.172374 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0712da37-cf8f-4f3c-8488-2399422136f1-config\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.175820 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0712da37-cf8f-4f3c-8488-2399422136f1-serving-cert\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.187872 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmxt\" (UniqueName: 
\"kubernetes.io/projected/0712da37-cf8f-4f3c-8488-2399422136f1-kube-api-access-2rmxt\") pod \"route-controller-manager-758666cb5c-76ktt\" (UID: \"0712da37-cf8f-4f3c-8488-2399422136f1\") " pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.215863 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.230248 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.272217 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7027c698-ea93-4995-aeed-ae9eda3d9897-tmp\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.272287 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8vf9q\" (UniqueName: \"kubernetes.io/projected/7027c698-ea93-4995-aeed-ae9eda3d9897-kube-api-access-8vf9q\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.272572 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-proxy-ca-bundles\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 
00:13:40.272766 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-config\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.272849 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7027c698-ea93-4995-aeed-ae9eda3d9897-serving-cert\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.272984 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-client-ca\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.274110 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7027c698-ea93-4995-aeed-ae9eda3d9897-tmp\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.274703 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-client-ca\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 
crc kubenswrapper[5109]: I0219 00:13:40.274906 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-config\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.275861 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7027c698-ea93-4995-aeed-ae9eda3d9897-proxy-ca-bundles\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.278369 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7027c698-ea93-4995-aeed-ae9eda3d9897-serving-cert\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.281387 5109 generic.go:358] "Generic (PLEG): container finished" podID="43bba815-cfda-4121-a857-94c60d92f1fb" containerID="22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd" exitCode=0 Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.281449 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" event={"ID":"43bba815-cfda-4121-a857-94c60d92f1fb","Type":"ContainerDied","Data":"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd"} Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.281519 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" event={"ID":"43bba815-cfda-4121-a857-94c60d92f1fb","Type":"ContainerDied","Data":"40087f6ca4eeb708554521e7a23c4ea8f5b9f22c5e31629341f8981f7372587b"} Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.281533 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.281553 5109 scope.go:117] "RemoveContainer" containerID="22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.284971 5109 generic.go:358] "Generic (PLEG): container finished" podID="2081eb95-0888-4007-a776-c0d49ad86851" containerID="87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1" exitCode=0 Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.285101 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" event={"ID":"2081eb95-0888-4007-a776-c0d49ad86851","Type":"ContainerDied","Data":"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1"} Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.285146 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" event={"ID":"2081eb95-0888-4007-a776-c0d49ad86851","Type":"ContainerDied","Data":"ac9951ae2e8edf0941d50f496da0f7f49b8da2bcfe78b1f1bbd6a5c692831b64"} Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.285189 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5549fcb785-8z6q8" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.300099 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.301930 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vf9q\" (UniqueName: \"kubernetes.io/projected/7027c698-ea93-4995-aeed-ae9eda3d9897-kube-api-access-8vf9q\") pod \"controller-manager-769d6595b7-qnppb\" (UID: \"7027c698-ea93-4995-aeed-ae9eda3d9897\") " pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.303694 5109 scope.go:117] "RemoveContainer" containerID="22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd" Feb 19 00:13:40 crc kubenswrapper[5109]: E0219 00:13:40.304063 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd\": container with ID starting with 22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd not found: ID does not exist" containerID="22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.304099 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd"} err="failed to get container status \"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd\": rpc error: code = NotFound desc = could not find container \"22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd\": container with ID starting with 22a86e95420b2103c2984c5e25124d7afb689ec24dd0603eb583a72b9b803efd not found: ID does not exist" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.304122 5109 scope.go:117] "RemoveContainer" containerID="87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 
00:13:40.313237 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.313830 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.336162 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.345082 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76975c9bd5-zmk66"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.346428 5109 scope.go:117] "RemoveContainer" containerID="87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1" Feb 19 00:13:40 crc kubenswrapper[5109]: E0219 00:13:40.346941 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1\": container with ID starting with 87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1 not found: ID does not exist" containerID="87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.346974 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1"} err="failed to get container status \"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1\": rpc error: code = NotFound desc = could not find container \"87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1\": container with ID starting with 87286797acebf73f8e501ea79407719a8aba8e922db15001824f8d743faf4bc1 not found: ID does not exist" Feb 
19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.350065 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5549fcb785-8z6q8"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.354871 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5549fcb785-8z6q8"] Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.367130 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.367346 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.411463 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.683095 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.688828 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.741121 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.751592 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.767264 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.772609 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.776466 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.824439 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 19 00:13:40 crc kubenswrapper[5109]: I0219 00:13:40.914936 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.000249 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2081eb95-0888-4007-a776-c0d49ad86851" path="/var/lib/kubelet/pods/2081eb95-0888-4007-a776-c0d49ad86851/volumes" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.001039 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43bba815-cfda-4121-a857-94c60d92f1fb" path="/var/lib/kubelet/pods/43bba815-cfda-4121-a857-94c60d92f1fb/volumes" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.152343 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.229927 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.291116 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.365330 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.402424 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.444987 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.445692 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.603706 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.605873 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.657596 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.693698 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.720796 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 19 00:13:41 crc 
kubenswrapper[5109]: I0219 00:13:41.733212 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.889591 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-769d6595b7-qnppb"] Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.949386 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt"] Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.967420 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 19 00:13:41 crc kubenswrapper[5109]: I0219 00:13:41.977476 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.037941 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.105557 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.124118 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.148432 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.166078 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 19 00:13:42 
crc kubenswrapper[5109]: I0219 00:13:42.172300 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.217070 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.284770 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.287190 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.297183 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" event={"ID":"0712da37-cf8f-4f3c-8488-2399422136f1","Type":"ContainerStarted","Data":"28bcd3688ad49dd38649e8e52d4ec5c4bc620dbddfba8e73a9ff1a2b21b25c96"} Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.297220 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" event={"ID":"0712da37-cf8f-4f3c-8488-2399422136f1","Type":"ContainerStarted","Data":"6942db8ce35ab26d5241733cb6a4cc1ecff62c65fc15af8b4bd63475fea16c47"} Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.297376 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.298384 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" 
event={"ID":"7027c698-ea93-4995-aeed-ae9eda3d9897","Type":"ContainerStarted","Data":"8a7773753903b8d58011f1d92a0c4417103632f901e8f67d4df9744ff0ba25aa"} Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.298410 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" event={"ID":"7027c698-ea93-4995-aeed-ae9eda3d9897","Type":"ContainerStarted","Data":"3c3225949e5f2a67bf65b67f16f859a80514278f94acd13cebe925a8edd1b378"} Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.299267 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.327060 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.327725 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" podStartSLOduration=5.327697282 podStartE2EDuration="5.327697282s" podCreationTimestamp="2026-02-19 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:42.318119626 +0000 UTC m=+252.154359635" watchObservedRunningTime="2026-02-19 00:13:42.327697282 +0000 UTC m=+252.163937331" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.343251 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" podStartSLOduration=5.343229245 podStartE2EDuration="5.343229245s" podCreationTimestamp="2026-02-19 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:42.342927385 +0000 UTC 
m=+252.179167414" watchObservedRunningTime="2026-02-19 00:13:42.343229245 +0000 UTC m=+252.179469234" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.345672 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.349075 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.368179 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.444196 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.522341 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.522958 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.553911 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.601687 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.640760 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.683099 5109 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.697110 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-769d6595b7-qnppb" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.704775 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.710279 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.722764 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.744452 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.756713 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.781174 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.781759 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.786998 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 19 00:13:42 crc 
kubenswrapper[5109]: I0219 00:13:42.808039 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.824289 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.854834 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.875604 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.877512 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.889174 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.890872 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 19 00:13:42 crc kubenswrapper[5109]: I0219 00:13:42.906787 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.125096 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.125184 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.127662 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.146248 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-758666cb5c-76ktt" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.192450 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.324416 5109 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.421419 5109 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.421860 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246" gracePeriod=5 Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.425327 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.497238 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.641756 5109 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 19 00:13:43 crc kubenswrapper[5109]: I0219 00:13:43.912926 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.203716 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.445347 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.704876 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.742290 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.767288 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.799681 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.878579 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.903470 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.940045 5109 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 19 00:13:44 crc kubenswrapper[5109]: I0219 00:13:44.948806 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.071950 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.196504 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.262781 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.270221 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.343255 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.439937 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.447751 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.529121 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 19 
00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.805696 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.866313 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 19 00:13:45 crc kubenswrapper[5109]: I0219 00:13:45.990908 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.072893 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.129462 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.186013 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.265857 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.452085 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.460997 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.780581 5109 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 19 00:13:46 crc kubenswrapper[5109]: I0219 00:13:46.855529 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 19 00:13:47 crc kubenswrapper[5109]: I0219 00:13:47.121995 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 19 00:13:48 crc kubenswrapper[5109]: I0219 00:13:48.289714 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:13:48 crc kubenswrapper[5109]: I0219 00:13:48.289824 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:13:48 crc kubenswrapper[5109]: I0219 00:13:48.605937 5109 ???:1] "http: TLS handshake error from 192.168.126.11:43982: no serving certificate available for the kubelet" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.004527 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.004603 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.088519 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.088568 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.088623 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.088727 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.088749 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.089022 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.089066 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.089091 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.089501 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.097987 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.190107 5109 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.190136 5109 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.190148 5109 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.190157 5109 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.190166 5109 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.343819 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.343889 5109 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246" exitCode=137 Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.343971 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.344074 5109 scope.go:117] "RemoveContainer" containerID="0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.365502 5109 scope.go:117] "RemoveContainer" containerID="0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246" Feb 19 00:13:49 crc kubenswrapper[5109]: E0219 00:13:49.366055 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246\": container with ID starting with 0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246 not found: ID does not exist" containerID="0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246" Feb 19 00:13:49 crc kubenswrapper[5109]: I0219 00:13:49.366238 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246"} err="failed to get container status \"0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246\": rpc error: code = NotFound desc = could not find container \"0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246\": container with ID starting with 0e65b7299c79408b67cf1cdad3874a3ae8402c2136a7b4602d81ec3a4f725246 not found: ID does not exist" Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 00:13:51.000849 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 00:13:51.001507 5109 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 
00:13:51.011707 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 00:13:51.011734 5109 kubelet.go:2759] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d2f27850-8656-4bd4-813d-753b9f7dfb5a" Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 00:13:51.015863 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 19 00:13:51 crc kubenswrapper[5109]: I0219 00:13:51.015903 5109 kubelet.go:2784] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="d2f27850-8656-4bd4-813d-753b9f7dfb5a" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.594656 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.595628 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8t8gx" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="registry-server" containerID="cri-o://9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02" gracePeriod=30 Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.604232 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.604704 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xsg6d" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="registry-server" containerID="cri-o://5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55" gracePeriod=30 Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.610014 5109 
kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.610382 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator" containerID="cri-o://b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3" gracePeriod=30 Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.622238 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.622504 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jz24j" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="registry-server" containerID="cri-o://20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a" gracePeriod=30 Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.629527 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g5j87"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.630309 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.630331 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.630473 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.644730 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.644917 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.645024 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jzxr2" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="registry-server" containerID="cri-o://81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a" gracePeriod=30 Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.647266 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g5j87"] Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.689421 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.689482 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwxw9\" (UniqueName: \"kubernetes.io/projected/d2efb82a-1039-47d1-9e51-102e80733bac-kube-api-access-gwxw9\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.689515 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2efb82a-1039-47d1-9e51-102e80733bac-tmp\") pod 
\"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.689670 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.790754 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.791035 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.791059 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwxw9\" (UniqueName: \"kubernetes.io/projected/d2efb82a-1039-47d1-9e51-102e80733bac-kube-api-access-gwxw9\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.791089 5109 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2efb82a-1039-47d1-9e51-102e80733bac-tmp\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.791722 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d2efb82a-1039-47d1-9e51-102e80733bac-tmp\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.792558 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.800209 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d2efb82a-1039-47d1-9e51-102e80733bac-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:56 crc kubenswrapper[5109]: I0219 00:13:56.820459 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwxw9\" (UniqueName: \"kubernetes.io/projected/d2efb82a-1039-47d1-9e51-102e80733bac-kube-api-access-gwxw9\") pod \"marketplace-operator-547dbd544d-g5j87\" (UID: \"d2efb82a-1039-47d1-9e51-102e80733bac\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.015161 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.043019 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.070907 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.083296 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.091677 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100186 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5bsr\" (UniqueName: \"kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr\") pod \"0ef4c094-cbdf-4990-8969-504112bbfa28\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100235 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content\") pod \"43671b9e-b630-4d24-b0d0-67940647761e\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100256 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td46f\" (UniqueName: 
\"kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f\") pod \"733d45f4-d790-461d-b86e-51a69aeceeb7\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100283 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp\") pod \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100299 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities\") pod \"43671b9e-b630-4d24-b0d0-67940647761e\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100370 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24vbc\" (UniqueName: \"kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc\") pod \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100402 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics\") pod \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100435 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities\") pod \"733d45f4-d790-461d-b86e-51a69aeceeb7\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 
00:13:57.100469 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95tfx\" (UniqueName: \"kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx\") pod \"43671b9e-b630-4d24-b0d0-67940647761e\" (UID: \"43671b9e-b630-4d24-b0d0-67940647761e\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100494 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content\") pod \"0ef4c094-cbdf-4990-8969-504112bbfa28\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100509 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities\") pod \"0ef4c094-cbdf-4990-8969-504112bbfa28\" (UID: \"0ef4c094-cbdf-4990-8969-504112bbfa28\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100524 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content\") pod \"733d45f4-d790-461d-b86e-51a69aeceeb7\" (UID: \"733d45f4-d790-461d-b86e-51a69aeceeb7\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.100570 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") pod \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\" (UID: \"dd92fdf2-3d74-4fac-af8c-c7fe7b025492\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.101414 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp" (OuterVolumeSpecName: "tmp") pod 
"dd92fdf2-3d74-4fac-af8c-c7fe7b025492" (UID: "dd92fdf2-3d74-4fac-af8c-c7fe7b025492"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.102705 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities" (OuterVolumeSpecName: "utilities") pod "43671b9e-b630-4d24-b0d0-67940647761e" (UID: "43671b9e-b630-4d24-b0d0-67940647761e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.103964 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "dd92fdf2-3d74-4fac-af8c-c7fe7b025492" (UID: "dd92fdf2-3d74-4fac-af8c-c7fe7b025492"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.104358 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities" (OuterVolumeSpecName: "utilities") pod "733d45f4-d790-461d-b86e-51a69aeceeb7" (UID: "733d45f4-d790-461d-b86e-51a69aeceeb7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.106820 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc" (OuterVolumeSpecName: "kube-api-access-24vbc") pod "dd92fdf2-3d74-4fac-af8c-c7fe7b025492" (UID: "dd92fdf2-3d74-4fac-af8c-c7fe7b025492"). InnerVolumeSpecName "kube-api-access-24vbc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.107465 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.107889 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f" (OuterVolumeSpecName: "kube-api-access-td46f") pod "733d45f4-d790-461d-b86e-51a69aeceeb7" (UID: "733d45f4-d790-461d-b86e-51a69aeceeb7"). InnerVolumeSpecName "kube-api-access-td46f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.108125 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "dd92fdf2-3d74-4fac-af8c-c7fe7b025492" (UID: "dd92fdf2-3d74-4fac-af8c-c7fe7b025492"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.110424 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities" (OuterVolumeSpecName: "utilities") pod "0ef4c094-cbdf-4990-8969-504112bbfa28" (UID: "0ef4c094-cbdf-4990-8969-504112bbfa28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.112357 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr" (OuterVolumeSpecName: "kube-api-access-q5bsr") pod "0ef4c094-cbdf-4990-8969-504112bbfa28" (UID: "0ef4c094-cbdf-4990-8969-504112bbfa28"). 
InnerVolumeSpecName "kube-api-access-q5bsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.118396 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx" (OuterVolumeSpecName: "kube-api-access-95tfx") pod "43671b9e-b630-4d24-b0d0-67940647761e" (UID: "43671b9e-b630-4d24-b0d0-67940647761e"). InnerVolumeSpecName "kube-api-access-95tfx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.139110 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ef4c094-cbdf-4990-8969-504112bbfa28" (UID: "0ef4c094-cbdf-4990-8969-504112bbfa28"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.159430 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43671b9e-b630-4d24-b0d0-67940647761e" (UID: "43671b9e-b630-4d24-b0d0-67940647761e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.201704 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content\") pod \"456ecd34-4fb1-495e-8a80-69dd40435de6\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.202963 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhxg2\" (UniqueName: \"kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2\") pod \"456ecd34-4fb1-495e-8a80-69dd40435de6\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.203556 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities\") pod \"456ecd34-4fb1-495e-8a80-69dd40435de6\" (UID: \"456ecd34-4fb1-495e-8a80-69dd40435de6\") " Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.204907 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities" (OuterVolumeSpecName: "utilities") pod "456ecd34-4fb1-495e-8a80-69dd40435de6" (UID: "456ecd34-4fb1-495e-8a80-69dd40435de6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.206060 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2" (OuterVolumeSpecName: "kube-api-access-vhxg2") pod "456ecd34-4fb1-495e-8a80-69dd40435de6" (UID: "456ecd34-4fb1-495e-8a80-69dd40435de6"). InnerVolumeSpecName "kube-api-access-vhxg2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.209195 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.209891 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vhxg2\" (UniqueName: \"kubernetes.io/projected/456ecd34-4fb1-495e-8a80-69dd40435de6-kube-api-access-vhxg2\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.210226 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.210722 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24vbc\" (UniqueName: \"kubernetes.io/projected/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-kube-api-access-24vbc\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.211185 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.211681 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.211966 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-95tfx\" (UniqueName: \"kubernetes.io/projected/43671b9e-b630-4d24-b0d0-67940647761e-kube-api-access-95tfx\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc 
kubenswrapper[5109]: I0219 00:13:57.212345 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.212758 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef4c094-cbdf-4990-8969-504112bbfa28-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.213044 5109 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.213461 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q5bsr\" (UniqueName: \"kubernetes.io/projected/0ef4c094-cbdf-4990-8969-504112bbfa28-kube-api-access-q5bsr\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.213868 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43671b9e-b630-4d24-b0d0-67940647761e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.214644 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-td46f\" (UniqueName: \"kubernetes.io/projected/733d45f4-d790-461d-b86e-51a69aeceeb7-kube-api-access-td46f\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.214808 5109 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dd92fdf2-3d74-4fac-af8c-c7fe7b025492-tmp\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.243042 5109 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "733d45f4-d790-461d-b86e-51a69aeceeb7" (UID: "733d45f4-d790-461d-b86e-51a69aeceeb7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.257121 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "456ecd34-4fb1-495e-8a80-69dd40435de6" (UID: "456ecd34-4fb1-495e-8a80-69dd40435de6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.316710 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/456ecd34-4fb1-495e-8a80-69dd40435de6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.316758 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/733d45f4-d790-461d-b86e-51a69aeceeb7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.390365 5109 generic.go:358] "Generic (PLEG): container finished" podID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerID="20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a" exitCode=0 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.390411 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerDied","Data":"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.390474 5109 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-jz24j" event={"ID":"0ef4c094-cbdf-4990-8969-504112bbfa28","Type":"ContainerDied","Data":"ff3419393eadae8278a8ad6cbf81a43e0a8b9900cb468aad9d42828e6759678b"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.390504 5109 scope.go:117] "RemoveContainer" containerID="20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.390519 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jz24j" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.394481 5109 generic.go:358] "Generic (PLEG): container finished" podID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerID="5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55" exitCode=0 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.394578 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerDied","Data":"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.394609 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xsg6d" event={"ID":"456ecd34-4fb1-495e-8a80-69dd40435de6","Type":"ContainerDied","Data":"a6825257268dcbd77fbd555ba6379754b45cb3ba980f7b3a8a295b6220d38087"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.394614 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xsg6d" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.399128 5109 generic.go:358] "Generic (PLEG): container finished" podID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerID="81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a" exitCode=0 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.399198 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerDied","Data":"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.399221 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jzxr2" event={"ID":"733d45f4-d790-461d-b86e-51a69aeceeb7","Type":"ContainerDied","Data":"366c95bd213d2ddb38b36bf2e2a71a54a5e6f479f6f075b7340381a0e6fe24ce"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.399411 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jzxr2" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.402076 5109 generic.go:358] "Generic (PLEG): container finished" podID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerID="b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3" exitCode=0 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.402252 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" event={"ID":"dd92fdf2-3d74-4fac-af8c-c7fe7b025492","Type":"ContainerDied","Data":"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.402295 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" event={"ID":"dd92fdf2-3d74-4fac-af8c-c7fe7b025492","Type":"ContainerDied","Data":"e0bf08f408eb0008b939137485c837a009523fe04a20c4fb60a51e6049f7f4b6"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.402427 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.407715 5109 generic.go:358] "Generic (PLEG): container finished" podID="43671b9e-b630-4d24-b0d0-67940647761e" containerID="9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02" exitCode=0 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.407938 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerDied","Data":"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.408077 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8t8gx" event={"ID":"43671b9e-b630-4d24-b0d0-67940647761e","Type":"ContainerDied","Data":"4cbd020d08030ba595be2c79bef92d58f137de3069d4718693039d0e34f52fab"} Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.408166 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8t8gx" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.419521 5109 scope.go:117] "RemoveContainer" containerID="0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.428725 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.431895 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jz24j"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.441582 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-g5j87"] Feb 19 00:13:57 crc kubenswrapper[5109]: W0219 00:13:57.457195 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2efb82a_1039_47d1_9e51_102e80733bac.slice/crio-1c356491fea4acfc8ca0459b208317ea7317f8bb458d0b340245b0c82dba6895 WatchSource:0}: Error finding container 1c356491fea4acfc8ca0459b208317ea7317f8bb458d0b340245b0c82dba6895: Status 404 returned error can't find the container with id 1c356491fea4acfc8ca0459b208317ea7317f8bb458d0b340245b0c82dba6895 Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.459073 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.463929 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xsg6d"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.474837 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.481807 5109 scope.go:117] "RemoveContainer" 
containerID="e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.482273 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jzxr2"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.485723 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.489824 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-ddddh"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.500094 5109 scope.go:117] "RemoveContainer" containerID="20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.500597 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a\": container with ID starting with 20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a not found: ID does not exist" containerID="20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.500664 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a"} err="failed to get container status \"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a\": rpc error: code = NotFound desc = could not find container \"20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a\": container with ID starting with 20bf62619b05845d7c7a33287613f09b09d7e702e823828b8af08733b77ac54a not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.500694 5109 scope.go:117] "RemoveContainer" 
containerID="0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.501274 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b\": container with ID starting with 0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b not found: ID does not exist" containerID="0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.501305 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b"} err="failed to get container status \"0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b\": rpc error: code = NotFound desc = could not find container \"0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b\": container with ID starting with 0045cebb426ead83e7c1fc67043ced8bb639ae0e24ebbde5c0288981efecaf2b not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.501327 5109 scope.go:117] "RemoveContainer" containerID="e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.501586 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618\": container with ID starting with e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618 not found: ID does not exist" containerID="e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.501658 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618"} err="failed to get container status \"e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618\": rpc error: code = NotFound desc = could not find container \"e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618\": container with ID starting with e415c7fa337f07a0974ea112c3aa2bfee89a805da088a700c4dfd193eef33618 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.501694 5109 scope.go:117] "RemoveContainer" containerID="5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.506442 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.506464 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8t8gx"] Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.521690 5109 scope.go:117] "RemoveContainer" containerID="0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.535954 5109 scope.go:117] "RemoveContainer" containerID="d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.559786 5109 scope.go:117] "RemoveContainer" containerID="5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.560270 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55\": container with ID starting with 5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55 not found: ID does not exist" containerID="5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55" Feb 19 
00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560310 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55"} err="failed to get container status \"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55\": rpc error: code = NotFound desc = could not find container \"5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55\": container with ID starting with 5d9767ab772df4b32e17d4504e14056a9521a92d0f7c520448ac87ebe3ca6b55 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560336 5109 scope.go:117] "RemoveContainer" containerID="0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.560609 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4\": container with ID starting with 0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4 not found: ID does not exist" containerID="0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560651 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4"} err="failed to get container status \"0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4\": rpc error: code = NotFound desc = could not find container \"0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4\": container with ID starting with 0aefde4d823f9169e1ce5c656b01c25783d69be8e1f582c6e2e2c5429c74def4 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560664 5109 scope.go:117] "RemoveContainer" 
containerID="d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.560905 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36\": container with ID starting with d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36 not found: ID does not exist" containerID="d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560925 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36"} err="failed to get container status \"d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36\": rpc error: code = NotFound desc = could not find container \"d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36\": container with ID starting with d16e8aaf4938d966fe9e2f9bc307ed695258aa0a09941f6f91676491f0ea5a36 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.560939 5109 scope.go:117] "RemoveContainer" containerID="81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.573243 5109 scope.go:117] "RemoveContainer" containerID="af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.596687 5109 scope.go:117] "RemoveContainer" containerID="22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.612283 5109 scope.go:117] "RemoveContainer" containerID="81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.612757 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a\": container with ID starting with 81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a not found: ID does not exist" containerID="81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.612788 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a"} err="failed to get container status \"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a\": rpc error: code = NotFound desc = could not find container \"81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a\": container with ID starting with 81d8190044f27623a8640d30df3674896b630b8f73d55805fb0ecabd67fdc25a not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.612808 5109 scope.go:117] "RemoveContainer" containerID="af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.613079 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de\": container with ID starting with af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de not found: ID does not exist" containerID="af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.613116 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de"} err="failed to get container status \"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de\": rpc error: code = NotFound desc = could not find container 
\"af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de\": container with ID starting with af27ef3131114b914148ef62e627884d59cadd91d47d9c5ad8071bda21e4a3de not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.613142 5109 scope.go:117] "RemoveContainer" containerID="22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.613419 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1\": container with ID starting with 22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1 not found: ID does not exist" containerID="22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.613439 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1"} err="failed to get container status \"22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1\": rpc error: code = NotFound desc = could not find container \"22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1\": container with ID starting with 22a1548ad9843f4198fa3a3f749b4fcb98bd560278bd8576f920a81415e673b1 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.613451 5109 scope.go:117] "RemoveContainer" containerID="b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.626713 5109 scope.go:117] "RemoveContainer" containerID="b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.627360 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3\": container with ID starting with b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3 not found: ID does not exist" containerID="b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.627401 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3"} err="failed to get container status \"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3\": rpc error: code = NotFound desc = could not find container \"b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3\": container with ID starting with b15b3eedea936054df80a485da564980246b36743cf7daa9d1908bf58f224ff3 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.627428 5109 scope.go:117] "RemoveContainer" containerID="9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.641124 5109 scope.go:117] "RemoveContainer" containerID="13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.656928 5109 scope.go:117] "RemoveContainer" containerID="06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.669579 5109 scope.go:117] "RemoveContainer" containerID="9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.669976 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02\": container with ID starting with 9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02 not found: ID does not exist" 
containerID="9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.670024 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02"} err="failed to get container status \"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02\": rpc error: code = NotFound desc = could not find container \"9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02\": container with ID starting with 9be05771224e01b7285fee0c57c883f3d60c292030b1b95b9dfc42d4dd579f02 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.670060 5109 scope.go:117] "RemoveContainer" containerID="13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.670424 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7\": container with ID starting with 13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7 not found: ID does not exist" containerID="13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.670492 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7"} err="failed to get container status \"13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7\": rpc error: code = NotFound desc = could not find container \"13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7\": container with ID starting with 13da1fd91a1daa242f295b650456728fb9495c1e275cca6e7f6f98c92138b3c7 not found: ID does not exist" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.670524 5109 scope.go:117] 
"RemoveContainer" containerID="06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48" Feb 19 00:13:57 crc kubenswrapper[5109]: E0219 00:13:57.670860 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48\": container with ID starting with 06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48 not found: ID does not exist" containerID="06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48" Feb 19 00:13:57 crc kubenswrapper[5109]: I0219 00:13:57.670897 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48"} err="failed to get container status \"06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48\": rpc error: code = NotFound desc = could not find container \"06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48\": container with ID starting with 06d543336bb8d15d16936c88c89ab50e5b833a787bbd33ef48b4f574f1056d48 not found: ID does not exist" Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.053199 5109 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-ddddh container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.053302 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-ddddh" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 
00:13:58.415662 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" event={"ID":"d2efb82a-1039-47d1-9e51-102e80733bac","Type":"ContainerStarted","Data":"86ded493277f4d26d5d5e2997e7b11b480ae4b0bb6787acf8ea2d8aebb2fbb10"} Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.415699 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" event={"ID":"d2efb82a-1039-47d1-9e51-102e80733bac","Type":"ContainerStarted","Data":"1c356491fea4acfc8ca0459b208317ea7317f8bb458d0b340245b0c82dba6895"} Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.416058 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.420053 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" Feb 19 00:13:58 crc kubenswrapper[5109]: I0219 00:13:58.431466 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-g5j87" podStartSLOduration=2.431449557 podStartE2EDuration="2.431449557s" podCreationTimestamp="2026-02-19 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:13:58.430605092 +0000 UTC m=+268.266845071" watchObservedRunningTime="2026-02-19 00:13:58.431449557 +0000 UTC m=+268.267689546" Feb 19 00:13:59 crc kubenswrapper[5109]: I0219 00:13:59.000991 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" path="/var/lib/kubelet/pods/0ef4c094-cbdf-4990-8969-504112bbfa28/volumes" Feb 19 00:13:59 crc kubenswrapper[5109]: I0219 00:13:59.002541 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="43671b9e-b630-4d24-b0d0-67940647761e" path="/var/lib/kubelet/pods/43671b9e-b630-4d24-b0d0-67940647761e/volumes"
Feb 19 00:13:59 crc kubenswrapper[5109]: I0219 00:13:59.003305 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" path="/var/lib/kubelet/pods/456ecd34-4fb1-495e-8a80-69dd40435de6/volumes"
Feb 19 00:13:59 crc kubenswrapper[5109]: I0219 00:13:59.004524 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" path="/var/lib/kubelet/pods/733d45f4-d790-461d-b86e-51a69aeceeb7/volumes"
Feb 19 00:13:59 crc kubenswrapper[5109]: I0219 00:13:59.005364 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" path="/var/lib/kubelet/pods/dd92fdf2-3d74-4fac-af8c-c7fe7b025492/volumes"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.176774 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524334-7q274"]
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177377 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177391 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177405 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177413 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177424 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177432 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177445 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177452 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177462 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177470 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177480 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177488 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177500 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177507 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177516 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177523 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177534 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177542 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177553 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177561 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="extract-utilities"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177572 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177580 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177598 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177605 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177652 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177663 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="extract-content"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177787 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="456ecd34-4fb1-495e-8a80-69dd40435de6" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177803 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="733d45f4-d790-461d-b86e-51a69aeceeb7" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177814 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="0ef4c094-cbdf-4990-8969-504112bbfa28" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177828 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd92fdf2-3d74-4fac-af8c-c7fe7b025492" containerName="marketplace-operator"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.177841 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="43671b9e-b630-4d24-b0d0-67940647761e" containerName="registry-server"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.187652 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524334-7q274"]
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.187794 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.191138 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.191865 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.254590 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw6zb\" (UniqueName: \"kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb\") pod \"auto-csr-approver-29524334-7q274\" (UID: \"d00d4e95-b25c-4c66-8a47-ebc62d3669f8\") " pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.357001 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xw6zb\" (UniqueName: \"kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb\") pod \"auto-csr-approver-29524334-7q274\" (UID: \"d00d4e95-b25c-4c66-8a47-ebc62d3669f8\") " pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.380318 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw6zb\" (UniqueName: \"kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb\") pod \"auto-csr-approver-29524334-7q274\" (UID: \"d00d4e95-b25c-4c66-8a47-ebc62d3669f8\") " pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.508141 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:00 crc kubenswrapper[5109]: I0219 00:14:00.912274 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524334-7q274"]
Feb 19 00:14:00 crc kubenswrapper[5109]: W0219 00:14:00.921033 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd00d4e95_b25c_4c66_8a47_ebc62d3669f8.slice/crio-822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9 WatchSource:0}: Error finding container 822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9: Status 404 returned error can't find the container with id 822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9
Feb 19 00:14:01 crc kubenswrapper[5109]: I0219 00:14:01.437994 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524334-7q274" event={"ID":"d00d4e95-b25c-4c66-8a47-ebc62d3669f8","Type":"ContainerStarted","Data":"822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9"}
Feb 19 00:14:04 crc kubenswrapper[5109]: I0219 00:14:04.452735 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524334-7q274" event={"ID":"d00d4e95-b25c-4c66-8a47-ebc62d3669f8","Type":"ContainerStarted","Data":"14bf90bd26dc86e7e6b3251ec822d8527b75af6f5e1117fb11fba74b4b5cf44d"}
Feb 19 00:14:04 crc kubenswrapper[5109]: I0219 00:14:04.472173 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524334-7q274" podStartSLOduration=1.420647895 podStartE2EDuration="4.472158278s" podCreationTimestamp="2026-02-19 00:14:00 +0000 UTC" firstStartedPulling="2026-02-19 00:14:00.92356621 +0000 UTC m=+270.759806239" lastFinishedPulling="2026-02-19 00:14:03.975076473 +0000 UTC m=+273.811316622" observedRunningTime="2026-02-19 00:14:04.470657654 +0000 UTC m=+274.306897643" watchObservedRunningTime="2026-02-19 00:14:04.472158278 +0000 UTC m=+274.308398257"
Feb 19 00:14:04 crc kubenswrapper[5109]: I0219 00:14:04.652005 5109 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-6gspk"
Feb 19 00:14:04 crc kubenswrapper[5109]: I0219 00:14:04.673297 5109 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-6gspk"
Feb 19 00:14:05 crc kubenswrapper[5109]: I0219 00:14:05.458284 5109 generic.go:358] "Generic (PLEG): container finished" podID="d00d4e95-b25c-4c66-8a47-ebc62d3669f8" containerID="14bf90bd26dc86e7e6b3251ec822d8527b75af6f5e1117fb11fba74b4b5cf44d" exitCode=0
Feb 19 00:14:05 crc kubenswrapper[5109]: I0219 00:14:05.458355 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524334-7q274" event={"ID":"d00d4e95-b25c-4c66-8a47-ebc62d3669f8","Type":"ContainerDied","Data":"14bf90bd26dc86e7e6b3251ec822d8527b75af6f5e1117fb11fba74b4b5cf44d"}
Feb 19 00:14:05 crc kubenswrapper[5109]: I0219 00:14:05.674739 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-21 00:09:04 +0000 UTC" deadline="2026-03-13 13:44:54.292474975 +0000 UTC"
Feb 19 00:14:05 crc kubenswrapper[5109]: I0219 00:14:05.675100 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="541h30m48.617386386s"
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.675416 5109 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-21 00:09:04 +0000 UTC" deadline="2026-03-16 00:12:43.541615304 +0000 UTC"
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.675458 5109 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="599h58m36.866161482s"
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.764730 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.834517 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw6zb\" (UniqueName: \"kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb\") pod \"d00d4e95-b25c-4c66-8a47-ebc62d3669f8\" (UID: \"d00d4e95-b25c-4c66-8a47-ebc62d3669f8\") "
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.847771 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb" (OuterVolumeSpecName: "kube-api-access-xw6zb") pod "d00d4e95-b25c-4c66-8a47-ebc62d3669f8" (UID: "d00d4e95-b25c-4c66-8a47-ebc62d3669f8"). InnerVolumeSpecName "kube-api-access-xw6zb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:14:06 crc kubenswrapper[5109]: I0219 00:14:06.936944 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xw6zb\" (UniqueName: \"kubernetes.io/projected/d00d4e95-b25c-4c66-8a47-ebc62d3669f8-kube-api-access-xw6zb\") on node \"crc\" DevicePath \"\""
Feb 19 00:14:07 crc kubenswrapper[5109]: I0219 00:14:07.469087 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524334-7q274"
Feb 19 00:14:07 crc kubenswrapper[5109]: I0219 00:14:07.469115 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524334-7q274" event={"ID":"d00d4e95-b25c-4c66-8a47-ebc62d3669f8","Type":"ContainerDied","Data":"822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9"}
Feb 19 00:14:07 crc kubenswrapper[5109]: I0219 00:14:07.469147 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="822b8bd3d8635583c7fdf01453832935cb93d4ab57c71558885fde0309f62ce9"
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.289846 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.290744 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.290834 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt"
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.291844 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.291951 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c" gracePeriod=600
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.549877 5109 generic.go:358] "Generic (PLEG): container finished" podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c" exitCode=0
Feb 19 00:14:18 crc kubenswrapper[5109]: I0219 00:14:18.549988 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c"}
Feb 19 00:14:19 crc kubenswrapper[5109]: I0219 00:14:19.561203 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b"}
Feb 19 00:14:31 crc kubenswrapper[5109]: I0219 00:14:31.188390 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 19 00:14:31 crc kubenswrapper[5109]: I0219 00:14:31.188954 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.388850 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"]
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.389981 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d00d4e95-b25c-4c66-8a47-ebc62d3669f8" containerName="oc"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.389996 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d00d4e95-b25c-4c66-8a47-ebc62d3669f8" containerName="oc"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.390089 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d00d4e95-b25c-4c66-8a47-ebc62d3669f8" containerName="oc"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.393948 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.396745 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\""
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.401204 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"]
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.531136 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st66c\" (UniqueName: \"kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.531227 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.531443 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.633167 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-st66c\" (UniqueName: \"kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.633688 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.633986 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.634186 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.634668 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.668719 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-st66c\" (UniqueName: \"kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c\") pod \"redhat-marketplace-hc754\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.722148 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc754"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.805191 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"]
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.815678 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.825070 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"]
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.938957 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-trusted-ca\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939463 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/611f0238-05ca-4a31-8ee6-2607b3dd7b53-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939520 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7748m\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-kube-api-access-7748m\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939563 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-certificates\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939598 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939670 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939705 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/611f0238-05ca-4a31-8ee6-2607b3dd7b53-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.939727 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-tls\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.967267 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.977173 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5jr5v"]
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.982568 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:58 crc kubenswrapper[5109]: I0219 00:14:58.989991 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.010406 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"]
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.010622 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jr5v"]
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041283 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041346 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/611f0238-05ca-4a31-8ee6-2607b3dd7b53-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041370 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-tls\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041409 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-trusted-ca\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041440 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/611f0238-05ca-4a31-8ee6-2607b3dd7b53-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041473 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7748m\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-kube-api-access-7748m\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.041507 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-certificates\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.043179 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/611f0238-05ca-4a31-8ee6-2607b3dd7b53-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.043262 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-certificates\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.044232 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/611f0238-05ca-4a31-8ee6-2607b3dd7b53-trusted-ca\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: W0219 00:14:59.048318 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fafd59c_5273_4f91_8772_cc3a3dd845fa.slice/crio-4381ebaafe5eb3911ecf5259b93f1a829a29bc455aa70967dc13d0596834735b WatchSource:0}: Error finding container 4381ebaafe5eb3911ecf5259b93f1a829a29bc455aa70967dc13d0596834735b: Status 404 returned error can't find the container with id 4381ebaafe5eb3911ecf5259b93f1a829a29bc455aa70967dc13d0596834735b
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.055562 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-registry-tls\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.055949 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/611f0238-05ca-4a31-8ee6-2607b3dd7b53-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.070530 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.072476 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7748m\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-kube-api-access-7748m\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.076091 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/611f0238-05ca-4a31-8ee6-2607b3dd7b53-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rh2mx\" (UID: \"611f0238-05ca-4a31-8ee6-2607b3dd7b53\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.133399 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.142369 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-utilities\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.142417 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k28xp\" (UniqueName: \"kubernetes.io/projected/7015d02a-6aa4-4209-b318-dfc88ebe6d01-kube-api-access-k28xp\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.142546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-catalog-content\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.243494 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-utilities\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.243930 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k28xp\" (UniqueName: \"kubernetes.io/projected/7015d02a-6aa4-4209-b318-dfc88ebe6d01-kube-api-access-k28xp\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.244006 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-catalog-content\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.244149 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-utilities\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.244425 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7015d02a-6aa4-4209-b318-dfc88ebe6d01-catalog-content\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.263002 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k28xp\" (UniqueName: \"kubernetes.io/projected/7015d02a-6aa4-4209-b318-dfc88ebe6d01-kube-api-access-k28xp\") pod \"redhat-operators-5jr5v\" (UID: \"7015d02a-6aa4-4209-b318-dfc88ebe6d01\") " pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.288685 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rh2mx"]
Feb 19 00:14:59 crc kubenswrapper[5109]: W0219 00:14:59.300072 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod611f0238_05ca_4a31_8ee6_2607b3dd7b53.slice/crio-c01c0111179d334340dce8876fec19c49022b3babcbfa58cb91eb4c99c45c0ac WatchSource:0}: Error finding container c01c0111179d334340dce8876fec19c49022b3babcbfa58cb91eb4c99c45c0ac: Status 404 returned error can't find the container with id c01c0111179d334340dce8876fec19c49022b3babcbfa58cb91eb4c99c45c0ac
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.316876 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5jr5v"
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.484067 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5jr5v"]
Feb 19 00:14:59 crc kubenswrapper[5109]: W0219 00:14:59.488443 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7015d02a_6aa4_4209_b318_dfc88ebe6d01.slice/crio-e9880761b9f57ecee210a95094df918bbbad1180130725d03e82e2f74fc9194f WatchSource:0}: Error finding container e9880761b9f57ecee210a95094df918bbbad1180130725d03e82e2f74fc9194f: Status 404 returned error can't find the container with id e9880761b9f57ecee210a95094df918bbbad1180130725d03e82e2f74fc9194f
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.819569 5109 generic.go:358] "Generic (PLEG): container finished" podID="7015d02a-6aa4-4209-b318-dfc88ebe6d01" containerID="7de48c0070e591151ac04779ad3edab6e2d9c9f81fd6ed5d412220b31c7748e1" exitCode=0
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.819697 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jr5v" event={"ID":"7015d02a-6aa4-4209-b318-dfc88ebe6d01","Type":"ContainerDied","Data":"7de48c0070e591151ac04779ad3edab6e2d9c9f81fd6ed5d412220b31c7748e1"}
Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.819734 5109 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-5jr5v" event={"ID":"7015d02a-6aa4-4209-b318-dfc88ebe6d01","Type":"ContainerStarted","Data":"e9880761b9f57ecee210a95094df918bbbad1180130725d03e82e2f74fc9194f"} Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.821507 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx" event={"ID":"611f0238-05ca-4a31-8ee6-2607b3dd7b53","Type":"ContainerStarted","Data":"ad8f2043b350516ee5d6a2883bc5cc34f26197e943bade7a92eb3aa235c5d9bf"} Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.821526 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx" event={"ID":"611f0238-05ca-4a31-8ee6-2607b3dd7b53","Type":"ContainerStarted","Data":"c01c0111179d334340dce8876fec19c49022b3babcbfa58cb91eb4c99c45c0ac"} Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.821951 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx" Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.824089 5109 generic.go:358] "Generic (PLEG): container finished" podID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerID="a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2" exitCode=0 Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.824121 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerDied","Data":"a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2"} Feb 19 00:14:59 crc kubenswrapper[5109]: I0219 00:14:59.824144 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerStarted","Data":"4381ebaafe5eb3911ecf5259b93f1a829a29bc455aa70967dc13d0596834735b"} Feb 19 00:14:59 crc 
kubenswrapper[5109]: I0219 00:14:59.864235 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx" podStartSLOduration=1.864198545 podStartE2EDuration="1.864198545s" podCreationTimestamp="2026-02-19 00:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:14:59.858874758 +0000 UTC m=+329.695114757" watchObservedRunningTime="2026-02-19 00:14:59.864198545 +0000 UTC m=+329.700438584" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.152117 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g"] Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.169477 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g"] Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.169715 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.173881 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.174151 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.261969 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82kr\" (UniqueName: \"kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.262171 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.262307 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.363719 5109 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.363788 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r82kr\" (UniqueName: \"kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.363880 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.364798 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.370620 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc 
kubenswrapper[5109]: I0219 00:15:00.378149 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82kr\" (UniqueName: \"kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr\") pod \"collect-profiles-29524335-hsw7g\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.486815 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.581619 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mpr9j"] Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.591254 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.593933 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.595960 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpr9j"] Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.668518 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-catalog-content\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.668680 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-utilities\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.668756 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-kube-api-access-n4gz2\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.708726 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g"] Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.769777 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-catalog-content\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.769830 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-utilities\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.769879 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-kube-api-access-n4gz2\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " 
pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.770404 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-catalog-content\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.770509 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-utilities\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.790883 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4gz2\" (UniqueName: \"kubernetes.io/projected/2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf-kube-api-access-n4gz2\") pod \"certified-operators-mpr9j\" (UID: \"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf\") " pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.832554 5109 generic.go:358] "Generic (PLEG): container finished" podID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerID="0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d" exitCode=0 Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.832599 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerDied","Data":"0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d"} Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.834593 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" 
event={"ID":"2e735da9-f644-455d-bad9-be5ab7e542bf","Type":"ContainerStarted","Data":"b4ad0b2222eb4098f2b9aee3954f658a41a36ead175da523c040711c7c18aa46"} Feb 19 00:15:00 crc kubenswrapper[5109]: I0219 00:15:00.907740 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.135728 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mpr9j"] Feb 19 00:15:01 crc kubenswrapper[5109]: W0219 00:15:01.159062 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fcc53fd_7dcd_428b_9e6e_73a42e3c37bf.slice/crio-15b8fa39b2429082b461d13249371e1a7d02fc8498b96d292b99bb17fd11f750 WatchSource:0}: Error finding container 15b8fa39b2429082b461d13249371e1a7d02fc8498b96d292b99bb17fd11f750: Status 404 returned error can't find the container with id 15b8fa39b2429082b461d13249371e1a7d02fc8498b96d292b99bb17fd11f750 Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.573442 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xl49c"] Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.579885 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.581773 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.583245 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xl49c"] Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.628990 5109 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.680689 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5rmk\" (UniqueName: \"kubernetes.io/projected/8e4f1385-5a2a-4098-b0c3-862f0656d43a-kube-api-access-m5rmk\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.680765 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-catalog-content\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.680823 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-utilities\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.782486 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-utilities\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.782579 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m5rmk\" (UniqueName: \"kubernetes.io/projected/8e4f1385-5a2a-4098-b0c3-862f0656d43a-kube-api-access-m5rmk\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.782654 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-catalog-content\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.783464 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-catalog-content\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.783757 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e4f1385-5a2a-4098-b0c3-862f0656d43a-utilities\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.808180 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-m5rmk\" (UniqueName: \"kubernetes.io/projected/8e4f1385-5a2a-4098-b0c3-862f0656d43a-kube-api-access-m5rmk\") pod \"community-operators-xl49c\" (UID: \"8e4f1385-5a2a-4098-b0c3-862f0656d43a\") " pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.841930 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerStarted","Data":"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe"} Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.843769 5109 generic.go:358] "Generic (PLEG): container finished" podID="2e735da9-f644-455d-bad9-be5ab7e542bf" containerID="fd4e49d2212bc642c9f764422932de2caaa11bcb394879a16d4c128ca16b88d0" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.843874 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" event={"ID":"2e735da9-f644-455d-bad9-be5ab7e542bf","Type":"ContainerDied","Data":"fd4e49d2212bc642c9f764422932de2caaa11bcb394879a16d4c128ca16b88d0"} Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.846018 5109 generic.go:358] "Generic (PLEG): container finished" podID="7015d02a-6aa4-4209-b318-dfc88ebe6d01" containerID="56f502f437551c2ec828248455feae6251f9aa2541040c0bfef96506445e62df" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.846065 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jr5v" event={"ID":"7015d02a-6aa4-4209-b318-dfc88ebe6d01","Type":"ContainerDied","Data":"56f502f437551c2ec828248455feae6251f9aa2541040c0bfef96506445e62df"} Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.848052 5109 generic.go:358] "Generic (PLEG): container finished" podID="2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf" 
containerID="e5274ef3508463a66fa47d1cf89644e57efb30ee4510d61a825d6eb5c2e088ca" exitCode=0 Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.848992 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpr9j" event={"ID":"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf","Type":"ContainerDied","Data":"e5274ef3508463a66fa47d1cf89644e57efb30ee4510d61a825d6eb5c2e088ca"} Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.849017 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpr9j" event={"ID":"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf","Type":"ContainerStarted","Data":"15b8fa39b2429082b461d13249371e1a7d02fc8498b96d292b99bb17fd11f750"} Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.858617 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hc754" podStartSLOduration=3.14523712 podStartE2EDuration="3.858599369s" podCreationTimestamp="2026-02-19 00:14:58 +0000 UTC" firstStartedPulling="2026-02-19 00:14:59.824727287 +0000 UTC m=+329.660967276" lastFinishedPulling="2026-02-19 00:15:00.538089536 +0000 UTC m=+330.374329525" observedRunningTime="2026-02-19 00:15:01.856557969 +0000 UTC m=+331.692797958" watchObservedRunningTime="2026-02-19 00:15:01.858599369 +0000 UTC m=+331.694839358" Feb 19 00:15:01 crc kubenswrapper[5109]: I0219 00:15:01.895896 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.113496 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xl49c"] Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.854893 5109 generic.go:358] "Generic (PLEG): container finished" podID="8e4f1385-5a2a-4098-b0c3-862f0656d43a" containerID="9b4529ff9b0657accc7fbc09a353d37e559f47bf1ef683884400c0c2720b7ddd" exitCode=0 Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.855305 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl49c" event={"ID":"8e4f1385-5a2a-4098-b0c3-862f0656d43a","Type":"ContainerDied","Data":"9b4529ff9b0657accc7fbc09a353d37e559f47bf1ef683884400c0c2720b7ddd"} Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.855339 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl49c" event={"ID":"8e4f1385-5a2a-4098-b0c3-862f0656d43a","Type":"ContainerStarted","Data":"75c5b8fd9dc11b3c3de9d64f4cc8c8cf81d13fc86342ae8ad8ceee3b1d4e5515"} Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.857715 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5jr5v" event={"ID":"7015d02a-6aa4-4209-b318-dfc88ebe6d01","Type":"ContainerStarted","Data":"3b7abdefd591eaa070c558e8f445cc61e00780afa3cd00cdf497ca94a59a0f60"} Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.861023 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpr9j" event={"ID":"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf","Type":"ContainerStarted","Data":"92130b8f184f58c423e682b10c1830a488a75eb44b07f43ef7df17ca7e1c5e06"} Feb 19 00:15:02 crc kubenswrapper[5109]: I0219 00:15:02.901336 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5jr5v" 
podStartSLOduration=3.937827666 podStartE2EDuration="4.90131839s" podCreationTimestamp="2026-02-19 00:14:58 +0000 UTC" firstStartedPulling="2026-02-19 00:14:59.82079857 +0000 UTC m=+329.657038589" lastFinishedPulling="2026-02-19 00:15:00.784289324 +0000 UTC m=+330.620529313" observedRunningTime="2026-02-19 00:15:02.900421453 +0000 UTC m=+332.736661452" watchObservedRunningTime="2026-02-19 00:15:02.90131839 +0000 UTC m=+332.737558379" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.100976 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.200328 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82kr\" (UniqueName: \"kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr\") pod \"2e735da9-f644-455d-bad9-be5ab7e542bf\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.200689 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume\") pod \"2e735da9-f644-455d-bad9-be5ab7e542bf\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.200774 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume\") pod \"2e735da9-f644-455d-bad9-be5ab7e542bf\" (UID: \"2e735da9-f644-455d-bad9-be5ab7e542bf\") " Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.201334 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"2e735da9-f644-455d-bad9-be5ab7e542bf" (UID: "2e735da9-f644-455d-bad9-be5ab7e542bf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.206256 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr" (OuterVolumeSpecName: "kube-api-access-r82kr") pod "2e735da9-f644-455d-bad9-be5ab7e542bf" (UID: "2e735da9-f644-455d-bad9-be5ab7e542bf"). InnerVolumeSpecName "kube-api-access-r82kr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.206425 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e735da9-f644-455d-bad9-be5ab7e542bf" (UID: "2e735da9-f644-455d-bad9-be5ab7e542bf"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.302143 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r82kr\" (UniqueName: \"kubernetes.io/projected/2e735da9-f644-455d-bad9-be5ab7e542bf-kube-api-access-r82kr\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.302177 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e735da9-f644-455d-bad9-be5ab7e542bf-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.302186 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e735da9-f644-455d-bad9-be5ab7e542bf-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.868441 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" event={"ID":"2e735da9-f644-455d-bad9-be5ab7e542bf","Type":"ContainerDied","Data":"b4ad0b2222eb4098f2b9aee3954f658a41a36ead175da523c040711c7c18aa46"} Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.868529 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4ad0b2222eb4098f2b9aee3954f658a41a36ead175da523c040711c7c18aa46" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.868460 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524335-hsw7g" Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.870499 5109 generic.go:358] "Generic (PLEG): container finished" podID="2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf" containerID="92130b8f184f58c423e682b10c1830a488a75eb44b07f43ef7df17ca7e1c5e06" exitCode=0 Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.870643 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpr9j" event={"ID":"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf","Type":"ContainerDied","Data":"92130b8f184f58c423e682b10c1830a488a75eb44b07f43ef7df17ca7e1c5e06"} Feb 19 00:15:03 crc kubenswrapper[5109]: I0219 00:15:03.873831 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl49c" event={"ID":"8e4f1385-5a2a-4098-b0c3-862f0656d43a","Type":"ContainerStarted","Data":"9d2e40e8a2cfc4f580fc4374c875b532a2f2365196b36d5ab8cfbe95ba6ad075"} Feb 19 00:15:04 crc kubenswrapper[5109]: I0219 00:15:04.881144 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mpr9j" event={"ID":"2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf","Type":"ContainerStarted","Data":"c8ff9cf4e2d62241e2783ef4409f52edaa8b8842e26d47bd962c1b9f40110a0f"} Feb 19 00:15:04 crc kubenswrapper[5109]: I0219 00:15:04.884010 5109 generic.go:358] "Generic (PLEG): container finished" podID="8e4f1385-5a2a-4098-b0c3-862f0656d43a" containerID="9d2e40e8a2cfc4f580fc4374c875b532a2f2365196b36d5ab8cfbe95ba6ad075" exitCode=0 Feb 19 00:15:04 crc kubenswrapper[5109]: I0219 00:15:04.884106 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl49c" event={"ID":"8e4f1385-5a2a-4098-b0c3-862f0656d43a","Type":"ContainerDied","Data":"9d2e40e8a2cfc4f580fc4374c875b532a2f2365196b36d5ab8cfbe95ba6ad075"} Feb 19 00:15:04 crc kubenswrapper[5109]: I0219 00:15:04.900896 5109 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mpr9j" podStartSLOduration=4.056008664 podStartE2EDuration="4.900872197s" podCreationTimestamp="2026-02-19 00:15:00 +0000 UTC" firstStartedPulling="2026-02-19 00:15:01.849072807 +0000 UTC m=+331.685312796" lastFinishedPulling="2026-02-19 00:15:02.69393633 +0000 UTC m=+332.530176329" observedRunningTime="2026-02-19 00:15:04.897477756 +0000 UTC m=+334.733717765" watchObservedRunningTime="2026-02-19 00:15:04.900872197 +0000 UTC m=+334.737112206" Feb 19 00:15:05 crc kubenswrapper[5109]: I0219 00:15:05.891791 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xl49c" event={"ID":"8e4f1385-5a2a-4098-b0c3-862f0656d43a","Type":"ContainerStarted","Data":"d5fbdfc0e5ba702b316a13f1f79fabb6a192f19adcf6c19b3ecb29a7fbbf0b60"} Feb 19 00:15:05 crc kubenswrapper[5109]: I0219 00:15:05.907183 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xl49c" podStartSLOduration=4.045826628 podStartE2EDuration="4.907154568s" podCreationTimestamp="2026-02-19 00:15:01 +0000 UTC" firstStartedPulling="2026-02-19 00:15:02.856032839 +0000 UTC m=+332.692272828" lastFinishedPulling="2026-02-19 00:15:03.717360779 +0000 UTC m=+333.553600768" observedRunningTime="2026-02-19 00:15:05.906383605 +0000 UTC m=+335.742623604" watchObservedRunningTime="2026-02-19 00:15:05.907154568 +0000 UTC m=+335.743394567" Feb 19 00:15:08 crc kubenswrapper[5109]: I0219 00:15:08.722911 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:15:08 crc kubenswrapper[5109]: I0219 00:15:08.722981 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:15:08 crc kubenswrapper[5109]: I0219 00:15:08.792198 5109 kubelet.go:2658] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:15:08 crc kubenswrapper[5109]: I0219 00:15:08.976189 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:15:09 crc kubenswrapper[5109]: I0219 00:15:09.317137 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5jr5v" Feb 19 00:15:09 crc kubenswrapper[5109]: I0219 00:15:09.317186 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-5jr5v" Feb 19 00:15:09 crc kubenswrapper[5109]: I0219 00:15:09.360453 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5jr5v" Feb 19 00:15:09 crc kubenswrapper[5109]: I0219 00:15:09.999566 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5jr5v" Feb 19 00:15:10 crc kubenswrapper[5109]: I0219 00:15:10.908242 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:10 crc kubenswrapper[5109]: I0219 00:15:10.908310 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:10 crc kubenswrapper[5109]: I0219 00:15:10.949857 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:11 crc kubenswrapper[5109]: I0219 00:15:11.007404 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mpr9j" Feb 19 00:15:11 crc kubenswrapper[5109]: I0219 00:15:11.896768 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:11 crc kubenswrapper[5109]: I0219 00:15:11.896895 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:11 crc kubenswrapper[5109]: I0219 00:15:11.939115 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:12 crc kubenswrapper[5109]: I0219 00:15:12.986697 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xl49c" Feb 19 00:15:21 crc kubenswrapper[5109]: I0219 00:15:21.856874 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rh2mx" Feb 19 00:15:21 crc kubenswrapper[5109]: I0219 00:15:21.950753 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"] Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.003599 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" podUID="bf93c47a-3819-4073-82e5-8bb1c9e73432" containerName="registry" containerID="cri-o://bcab3ba9368fc474aaab0d1f5cab3431f543874abf597cf7f3d2c537a1bc4f2e" gracePeriod=30 Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.169843 5109 generic.go:358] "Generic (PLEG): container finished" podID="bf93c47a-3819-4073-82e5-8bb1c9e73432" containerID="bcab3ba9368fc474aaab0d1f5cab3431f543874abf597cf7f3d2c537a1bc4f2e" exitCode=0 Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.169903 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" event={"ID":"bf93c47a-3819-4073-82e5-8bb1c9e73432","Type":"ContainerDied","Data":"bcab3ba9368fc474aaab0d1f5cab3431f543874abf597cf7f3d2c537a1bc4f2e"} Feb 19 00:15:47 crc 
kubenswrapper[5109]: I0219 00:15:47.417397 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.451526 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68r6c\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452042 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452200 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452423 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452581 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 
00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452751 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.452896 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.453099 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates\") pod \"bf93c47a-3819-4073-82e5-8bb1c9e73432\" (UID: \"bf93c47a-3819-4073-82e5-8bb1c9e73432\") " Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.453389 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.453618 5109 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.454015 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.459752 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.461085 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.464099 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.464598 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c" (OuterVolumeSpecName: "kube-api-access-68r6c") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "kube-api-access-68r6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.468823 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.486243 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bf93c47a-3819-4073-82e5-8bb1c9e73432" (UID: "bf93c47a-3819-4073-82e5-8bb1c9e73432"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555166 5109 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555201 5109 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555213 5109 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bf93c47a-3819-4073-82e5-8bb1c9e73432-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555223 5109 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bf93c47a-3819-4073-82e5-8bb1c9e73432-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555234 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-68r6c\" (UniqueName: \"kubernetes.io/projected/bf93c47a-3819-4073-82e5-8bb1c9e73432-kube-api-access-68r6c\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:47 crc kubenswrapper[5109]: I0219 00:15:47.555241 5109 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bf93c47a-3819-4073-82e5-8bb1c9e73432-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.180334 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.180408 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmk4g" event={"ID":"bf93c47a-3819-4073-82e5-8bb1c9e73432","Type":"ContainerDied","Data":"43ada7017445eee7d68d2255c705ef7029c1bc37e74765d48ebf78e15a42d6ed"} Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.180512 5109 scope.go:117] "RemoveContainer" containerID="bcab3ba9368fc474aaab0d1f5cab3431f543874abf597cf7f3d2c537a1bc4f2e" Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.247499 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"] Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.251822 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmk4g"] Feb 19 00:15:48 crc kubenswrapper[5109]: I0219 00:15:48.998535 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf93c47a-3819-4073-82e5-8bb1c9e73432" path="/var/lib/kubelet/pods/bf93c47a-3819-4073-82e5-8bb1c9e73432/volumes" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.135097 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524336-tg2rm"] Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136143 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e735da9-f644-455d-bad9-be5ab7e542bf" containerName="collect-profiles" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136158 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e735da9-f644-455d-bad9-be5ab7e542bf" containerName="collect-profiles" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136170 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf93c47a-3819-4073-82e5-8bb1c9e73432" 
containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136176 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf93c47a-3819-4073-82e5-8bb1c9e73432" containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136261 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf93c47a-3819-4073-82e5-8bb1c9e73432" containerName="registry" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.136276 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e735da9-f644-455d-bad9-be5ab7e542bf" containerName="collect-profiles" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.139274 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.141778 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.141785 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.147340 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-tg2rm"] Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.221253 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx9vq\" (UniqueName: \"kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq\") pod \"auto-csr-approver-29524336-tg2rm\" (UID: \"095c765b-bd19-495f-a5d2-60abe52b0ee8\") " pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.322517 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dx9vq\" (UniqueName: 
\"kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq\") pod \"auto-csr-approver-29524336-tg2rm\" (UID: \"095c765b-bd19-495f-a5d2-60abe52b0ee8\") " pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.344195 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx9vq\" (UniqueName: \"kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq\") pod \"auto-csr-approver-29524336-tg2rm\" (UID: \"095c765b-bd19-495f-a5d2-60abe52b0ee8\") " pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.453199 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:00 crc kubenswrapper[5109]: I0219 00:16:00.679455 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-tg2rm"] Feb 19 00:16:00 crc kubenswrapper[5109]: W0219 00:16:00.687532 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095c765b_bd19_495f_a5d2_60abe52b0ee8.slice/crio-eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606 WatchSource:0}: Error finding container eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606: Status 404 returned error can't find the container with id eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606 Feb 19 00:16:01 crc kubenswrapper[5109]: I0219 00:16:01.272898 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" event={"ID":"095c765b-bd19-495f-a5d2-60abe52b0ee8","Type":"ContainerStarted","Data":"eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606"} Feb 19 00:16:02 crc kubenswrapper[5109]: I0219 00:16:02.282748 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29524336-tg2rm" event={"ID":"095c765b-bd19-495f-a5d2-60abe52b0ee8","Type":"ContainerStarted","Data":"8e4b01fcf0a2c53a5946ca2505a369101f0565dcf6a7855a5cc18721e85a4e47"} Feb 19 00:16:02 crc kubenswrapper[5109]: I0219 00:16:02.296478 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" podStartSLOduration=1.066750193 podStartE2EDuration="2.296463145s" podCreationTimestamp="2026-02-19 00:16:00 +0000 UTC" firstStartedPulling="2026-02-19 00:16:00.689017643 +0000 UTC m=+390.525257672" lastFinishedPulling="2026-02-19 00:16:01.918730565 +0000 UTC m=+391.754970624" observedRunningTime="2026-02-19 00:16:02.296022461 +0000 UTC m=+392.132262450" watchObservedRunningTime="2026-02-19 00:16:02.296463145 +0000 UTC m=+392.132703124" Feb 19 00:16:03 crc kubenswrapper[5109]: I0219 00:16:03.290215 5109 generic.go:358] "Generic (PLEG): container finished" podID="095c765b-bd19-495f-a5d2-60abe52b0ee8" containerID="8e4b01fcf0a2c53a5946ca2505a369101f0565dcf6a7855a5cc18721e85a4e47" exitCode=0 Feb 19 00:16:03 crc kubenswrapper[5109]: I0219 00:16:03.290291 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" event={"ID":"095c765b-bd19-495f-a5d2-60abe52b0ee8","Type":"ContainerDied","Data":"8e4b01fcf0a2c53a5946ca2505a369101f0565dcf6a7855a5cc18721e85a4e47"} Feb 19 00:16:04 crc kubenswrapper[5109]: I0219 00:16:04.571817 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:04 crc kubenswrapper[5109]: I0219 00:16:04.683838 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx9vq\" (UniqueName: \"kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq\") pod \"095c765b-bd19-495f-a5d2-60abe52b0ee8\" (UID: \"095c765b-bd19-495f-a5d2-60abe52b0ee8\") " Feb 19 00:16:04 crc kubenswrapper[5109]: I0219 00:16:04.692942 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq" (OuterVolumeSpecName: "kube-api-access-dx9vq") pod "095c765b-bd19-495f-a5d2-60abe52b0ee8" (UID: "095c765b-bd19-495f-a5d2-60abe52b0ee8"). InnerVolumeSpecName "kube-api-access-dx9vq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:16:04 crc kubenswrapper[5109]: I0219 00:16:04.785845 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dx9vq\" (UniqueName: \"kubernetes.io/projected/095c765b-bd19-495f-a5d2-60abe52b0ee8-kube-api-access-dx9vq\") on node \"crc\" DevicePath \"\"" Feb 19 00:16:05 crc kubenswrapper[5109]: I0219 00:16:05.309155 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" event={"ID":"095c765b-bd19-495f-a5d2-60abe52b0ee8","Type":"ContainerDied","Data":"eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606"} Feb 19 00:16:05 crc kubenswrapper[5109]: I0219 00:16:05.309191 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524336-tg2rm" Feb 19 00:16:05 crc kubenswrapper[5109]: I0219 00:16:05.309210 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eab6a5ceb50ca22312e4633c62cb67b37e2b2310818c702149d8be9a8a1f6606" Feb 19 00:16:18 crc kubenswrapper[5109]: I0219 00:16:18.290192 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:18 crc kubenswrapper[5109]: I0219 00:16:18.291008 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:16:48 crc kubenswrapper[5109]: I0219 00:16:48.290132 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:16:48 crc kubenswrapper[5109]: I0219 00:16:48.290735 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.289927 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.290602 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.290724 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.291766 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.291878 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b" gracePeriod=600 Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.817522 5109 generic.go:358] "Generic (PLEG): container finished" podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b" exitCode=0 Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.817628 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b"} Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.818119 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba"} Feb 19 00:17:18 crc kubenswrapper[5109]: I0219 00:17:18.818152 5109 scope.go:117] "RemoveContainer" containerID="42f92fd42b62dd83256fd5c9479224a96b38837d7cf60fd551ce59852493df3c" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.147014 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524338-l7qgq"] Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.148895 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="095c765b-bd19-495f-a5d2-60abe52b0ee8" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.148923 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="095c765b-bd19-495f-a5d2-60abe52b0ee8" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.149203 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="095c765b-bd19-495f-a5d2-60abe52b0ee8" containerName="oc" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.306785 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.314261 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.314270 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.330751 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-l7qgq"] Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.404546 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjmr\" (UniqueName: \"kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr\") pod \"auto-csr-approver-29524338-l7qgq\" (UID: \"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3\") " pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.506884 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjmr\" (UniqueName: \"kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr\") pod \"auto-csr-approver-29524338-l7qgq\" (UID: \"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3\") " pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.535387 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjmr\" (UniqueName: \"kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr\") pod \"auto-csr-approver-29524338-l7qgq\" (UID: \"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3\") " pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.642194 5109 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:00 crc kubenswrapper[5109]: I0219 00:18:00.913129 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-l7qgq"] Feb 19 00:18:01 crc kubenswrapper[5109]: I0219 00:18:01.318752 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" event={"ID":"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3","Type":"ContainerStarted","Data":"b1090e87ceda2e4bc266e0393f56e56b3ea782c8988e72e015f904b7e26f797b"} Feb 19 00:18:02 crc kubenswrapper[5109]: I0219 00:18:02.328088 5109 generic.go:358] "Generic (PLEG): container finished" podID="1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" containerID="0f9aaf70b6930c00f373e57b1be813dee1fd510a4ef2c906ecd0965c2a58bbfe" exitCode=0 Feb 19 00:18:02 crc kubenswrapper[5109]: I0219 00:18:02.328163 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" event={"ID":"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3","Type":"ContainerDied","Data":"0f9aaf70b6930c00f373e57b1be813dee1fd510a4ef2c906ecd0965c2a58bbfe"} Feb 19 00:18:03 crc kubenswrapper[5109]: I0219 00:18:03.616365 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:03 crc kubenswrapper[5109]: I0219 00:18:03.643506 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsjmr\" (UniqueName: \"kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr\") pod \"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3\" (UID: \"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3\") " Feb 19 00:18:03 crc kubenswrapper[5109]: I0219 00:18:03.651312 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr" (OuterVolumeSpecName: "kube-api-access-vsjmr") pod "1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" (UID: "1043a162-8b5d-4bbb-a40a-0a0b1ee213d3"). InnerVolumeSpecName "kube-api-access-vsjmr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:18:03 crc kubenswrapper[5109]: I0219 00:18:03.745046 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vsjmr\" (UniqueName: \"kubernetes.io/projected/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3-kube-api-access-vsjmr\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:04 crc kubenswrapper[5109]: I0219 00:18:04.345476 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" event={"ID":"1043a162-8b5d-4bbb-a40a-0a0b1ee213d3","Type":"ContainerDied","Data":"b1090e87ceda2e4bc266e0393f56e56b3ea782c8988e72e015f904b7e26f797b"} Feb 19 00:18:04 crc kubenswrapper[5109]: I0219 00:18:04.345529 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1090e87ceda2e4bc266e0393f56e56b3ea782c8988e72e015f904b7e26f797b" Feb 19 00:18:04 crc kubenswrapper[5109]: I0219 00:18:04.345502 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524338-l7qgq" Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.744836 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"] Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.745487 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="kube-rbac-proxy" containerID="cri-o://e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.745554 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="ovnkube-cluster-manager" containerID="cri-o://73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.920727 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfm9"] Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921366 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-controller" containerID="cri-o://c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921566 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="sbdb" containerID="cri-o://4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 
00:18:57.921624 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="nbdb" containerID="cri-o://0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921697 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-acl-logging" containerID="cri-o://cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921678 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-node" containerID="cri-o://6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921737 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="northd" containerID="cri-o://9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.921797 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" gracePeriod=30 Feb 19 00:18:57 crc kubenswrapper[5109]: I0219 00:18:57.944172 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" 
podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovnkube-controller" containerID="cri-o://600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" gracePeriod=30 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.025272 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.048871 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049348 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" containerName="oc" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049360 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" containerName="oc" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049380 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="ovnkube-cluster-manager" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049385 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="ovnkube-cluster-manager" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049393 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="kube-rbac-proxy" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049399 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="kube-rbac-proxy" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049505 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" 
containerName="kube-rbac-proxy" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049517 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerName="ovnkube-cluster-manager" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.049525 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" containerName="oc" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.056476 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095109 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides\") pod \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095256 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert\") pod \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095330 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gc7q\" (UniqueName: \"kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q\") pod \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095356 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config\") pod \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\" (UID: \"5a1c588b-414d-4d41-94a6-b74745ffd8c9\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095640 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz872\" (UniqueName: \"kubernetes.io/projected/b622e341-4558-4516-9156-d7c83f36eee1-kube-api-access-kz872\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095719 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095736 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b622e341-4558-4516-9156-d7c83f36eee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095772 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5a1c588b-414d-4d41-94a6-b74745ffd8c9" (UID: "5a1c588b-414d-4d41-94a6-b74745ffd8c9"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095772 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5a1c588b-414d-4d41-94a6-b74745ffd8c9" (UID: "5a1c588b-414d-4d41-94a6-b74745ffd8c9"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095819 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095888 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.095899 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.101928 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "5a1c588b-414d-4d41-94a6-b74745ffd8c9" (UID: "5a1c588b-414d-4d41-94a6-b74745ffd8c9"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.102005 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q" (OuterVolumeSpecName: "kube-api-access-5gc7q") pod "5a1c588b-414d-4d41-94a6-b74745ffd8c9" (UID: "5a1c588b-414d-4d41-94a6-b74745ffd8c9"). InnerVolumeSpecName "kube-api-access-5gc7q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.187020 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfm9_2955042f-e905-4bd8-893a-97e7c9723fca/ovn-acl-logging/0.log" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.187439 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfm9_2955042f-e905-4bd8-893a-97e7c9723fca/ovn-controller/0.log" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.187881 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196516 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kz872\" (UniqueName: \"kubernetes.io/projected/b622e341-4558-4516-9156-d7c83f36eee1-kube-api-access-kz872\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196556 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196576 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b622e341-4558-4516-9156-d7c83f36eee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196611 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196673 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/5a1c588b-414d-4d41-94a6-b74745ffd8c9-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.196684 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5gc7q\" (UniqueName: \"kubernetes.io/projected/5a1c588b-414d-4d41-94a6-b74745ffd8c9-kube-api-access-5gc7q\") on node \"crc\" DevicePath \"\"" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.197487 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.197781 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b622e341-4558-4516-9156-d7c83f36eee1-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.202559 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b622e341-4558-4516-9156-d7c83f36eee1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: \"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.214961 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz872\" (UniqueName: \"kubernetes.io/projected/b622e341-4558-4516-9156-d7c83f36eee1-kube-api-access-kz872\") pod \"ovnkube-control-plane-97c9b6c48-2v2zf\" (UID: 
\"b622e341-4558-4516-9156-d7c83f36eee1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238023 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w55wb"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238622 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kubecfg-setup" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238682 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kubecfg-setup" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238695 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238703 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238714 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="nbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.238721 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="nbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239138 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-controller" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239157 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-controller" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239165 5109 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="northd" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239171 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="northd" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239188 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-acl-logging" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239197 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-acl-logging" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239210 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-node" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239216 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-node" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239229 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="sbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239236 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="sbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239246 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovnkube-controller" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239253 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovnkube-controller" Feb 19 00:18:58 crc 
kubenswrapper[5109]: I0219 00:18:58.239375 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovnkube-controller" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239386 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="sbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239398 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-controller" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239407 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="nbdb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239419 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="ovn-acl-logging" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239429 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="northd" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239437 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-node" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.239448 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" containerName="kube-rbac-proxy-ovn-metrics" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.246738 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297118 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj2g9\" (UniqueName: \"kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297166 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297190 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297381 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297417 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297446 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297470 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297502 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") " Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297490 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297536 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297539 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash" (OuterVolumeSpecName: "host-slash") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297572 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297588 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297596 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297570 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297562 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297623 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297677 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297704 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297706 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297749 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297801 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297845 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297867 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297891 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297917 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297925 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log" (OuterVolumeSpecName: "node-log") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297958 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297953 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket" (OuterVolumeSpecName: "log-socket") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297961 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297987 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.297989 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298021 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd\") pod \"2955042f-e905-4bd8-893a-97e7c9723fca\" (UID: \"2955042f-e905-4bd8-893a-97e7c9723fca\") "
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298099 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298240 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-ovn\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298309 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-slash\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298318 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298343 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-node-log\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298378 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-kubelet\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298402 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298441 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-env-overrides\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298446 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298463 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-netd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298530 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298549 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298581 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-netns\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298606 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-var-lib-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298650 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovn-node-metrics-cert\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298716 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298753 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-script-lib\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298956 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wxnc\" (UniqueName: \"kubernetes.io/projected/6aff3eda-caf3-4a12-8265-19f4c5f79717-kube-api-access-9wxnc\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.298980 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-systemd-units\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299012 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-systemd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299030 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-config\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299124 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-etc-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299190 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-log-socket\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299261 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-bin\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299377 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299394 5109 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299406 5109 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299419 5109 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-log-socket\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299432 5109 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-node-log\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299443 5109 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299454 5109 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299464 5109 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-kubelet\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299475 5109 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-netd\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299487 5109 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-openvswitch\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299495 5109 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-slash\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299502 5109 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-netns\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299519 5109 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-cni-bin\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299527 5109 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-systemd-units\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299535 5109 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299545 5109 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.299554 5109 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2955042f-e905-4bd8-893a-97e7c9723fca-env-overrides\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.301541 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9" (OuterVolumeSpecName: "kube-api-access-kj2g9") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "kube-api-access-kj2g9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.306167 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.309414 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2955042f-e905-4bd8-893a-97e7c9723fca" (UID: "2955042f-e905-4bd8-893a-97e7c9723fca"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.399823 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400022 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovn-node-metrics-cert\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400047 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400063 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-script-lib\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400100 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9wxnc\" (UniqueName: \"kubernetes.io/projected/6aff3eda-caf3-4a12-8265-19f4c5f79717-kube-api-access-9wxnc\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400115 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-systemd-units\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400163 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-systemd-units\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400204 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400277 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-systemd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400313 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-config\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400389 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-etc-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400411 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-log-socket\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400441 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-systemd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400462 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-bin\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400487 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-bin\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400511 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-ovn\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400548 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-slash\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400578 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-node-log\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400581 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-etc-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400617 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-kubelet\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400690 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400745 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-log-socket\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400799 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-run-ovn\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400859 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-slash\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400916 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-script-lib\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400914 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-node-log\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400746 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-env-overrides\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400983 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400986 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-netd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401040 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401060 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-netns\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401070 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovnkube-config\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401080 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-var-lib-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.400956 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-kubelet\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401011 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-cni-netd\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401106 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401144 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-host-run-netns\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401157 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6aff3eda-caf3-4a12-8265-19f4c5f79717-var-lib-openvswitch\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb"
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401232 5109 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2955042f-e905-4bd8-893a-97e7c9723fca-run-systemd\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401273 5109 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2955042f-e905-4bd8-893a-97e7c9723fca-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401308 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kj2g9\" (UniqueName: \"kubernetes.io/projected/2955042f-e905-4bd8-893a-97e7c9723fca-kube-api-access-kj2g9\") on node \"crc\" DevicePath \"\""
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.401391 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName:
\"kubernetes.io/configmap/6aff3eda-caf3-4a12-8265-19f4c5f79717-env-overrides\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.403393 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6aff3eda-caf3-4a12-8265-19f4c5f79717-ovn-node-metrics-cert\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.429981 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wxnc\" (UniqueName: \"kubernetes.io/projected/6aff3eda-caf3-4a12-8265-19f4c5f79717-kube-api-access-9wxnc\") pod \"ovnkube-node-w55wb\" (UID: \"6aff3eda-caf3-4a12-8265-19f4c5f79717\") " pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.557597 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:18:58 crc kubenswrapper[5109]: W0219 00:18:58.574163 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aff3eda_caf3_4a12_8265_19f4c5f79717.slice/crio-7c4f032538bb7546a162f76d74f292ff0db8cc6f87511d76f00ec1174ffa1281 WatchSource:0}: Error finding container 7c4f032538bb7546a162f76d74f292ff0db8cc6f87511d76f00ec1174ffa1281: Status 404 returned error can't find the container with id 7c4f032538bb7546a162f76d74f292ff0db8cc6f87511d76f00ec1174ffa1281 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729424 5109 generic.go:358] "Generic (PLEG): container finished" podID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerID="73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729461 5109 generic.go:358] "Generic (PLEG): container finished" podID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" containerID="e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729513 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerDied","Data":"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729533 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729554 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerDied","Data":"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729583 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94" event={"ID":"5a1c588b-414d-4d41-94a6-b74745ffd8c9","Type":"ContainerDied","Data":"1ec5a4dd74b6f09d1465fa4a18e0a36b9172edc2820bed79e1b65b26efe9c091"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.729611 5109 scope.go:117] "RemoveContainer" containerID="73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.734932 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfm9_2955042f-e905-4bd8-893a-97e7c9723fca/ovn-acl-logging/0.log" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735518 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-bgfm9_2955042f-e905-4bd8-893a-97e7c9723fca/ovn-controller/0.log" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735876 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735896 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735903 5109 
generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735910 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735916 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735923 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735930 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" exitCode=143 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735936 5109 generic.go:358] "Generic (PLEG): container finished" podID="2955042f-e905-4bd8-893a-97e7c9723fca" containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" exitCode=143 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735952 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735982 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" 
event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.735994 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736005 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736016 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736026 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736036 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736045 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736050 5109 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736055 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736060 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736065 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736071 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736078 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736084 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736092 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 
00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736099 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736105 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736110 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736115 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736120 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736125 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736130 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736134 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 
00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736139 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736146 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736067 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736153 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736503 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736510 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736515 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736520 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} 
Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736525 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736531 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736535 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736540 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736550 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bgfm9" event={"ID":"2955042f-e905-4bd8-893a-97e7c9723fca","Type":"ContainerDied","Data":"6bb581bc8ecefe4984214d4b56f5c6b8603839085b55a0e81dc2e4cac8eb01a5"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736561 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736567 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736572 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736576 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736581 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736586 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736590 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736595 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.736599 5109 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.738137 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" event={"ID":"b622e341-4558-4516-9156-d7c83f36eee1","Type":"ContainerStarted","Data":"42322bb2565f783321b8b54ff44b6159073ba5e3b42f5040d1b5cb5187cc3fff"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.738161 
5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" event={"ID":"b622e341-4558-4516-9156-d7c83f36eee1","Type":"ContainerStarted","Data":"3ed89b4cbe237c51a788321b8a59a7a27a12bb4a56d213501b696032f4bb3770"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.739568 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.739596 5109 generic.go:358] "Generic (PLEG): container finished" podID="9d3c36ec-d151-4cb3-8bcb-931c2665a1e7" containerID="c36d18549c89f325a547d5d1938e591a3549ad096def50af8829a9adee3ac740" exitCode=2 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.739660 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ctz69" event={"ID":"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7","Type":"ContainerDied","Data":"c36d18549c89f325a547d5d1938e591a3549ad096def50af8829a9adee3ac740"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.741450 5109 generic.go:358] "Generic (PLEG): container finished" podID="6aff3eda-caf3-4a12-8265-19f4c5f79717" containerID="0fc41f1bfd64db2a3bf249a6aa2b7c7a546aa295e15b02199997f35160a58d39" exitCode=0 Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.741526 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerDied","Data":"0fc41f1bfd64db2a3bf249a6aa2b7c7a546aa295e15b02199997f35160a58d39"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.741578 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"7c4f032538bb7546a162f76d74f292ff0db8cc6f87511d76f00ec1174ffa1281"} Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 
00:18:58.742003 5109 scope.go:117] "RemoveContainer" containerID="c36d18549c89f325a547d5d1938e591a3549ad096def50af8829a9adee3ac740" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.757508 5109 scope.go:117] "RemoveContainer" containerID="e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.775022 5109 scope.go:117] "RemoveContainer" containerID="73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.775449 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d\": container with ID starting with 73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d not found: ID does not exist" containerID="73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.775486 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d"} err="failed to get container status \"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d\": rpc error: code = NotFound desc = could not find container \"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d\": container with ID starting with 73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.775507 5109 scope.go:117] "RemoveContainer" containerID="e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.775723 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7\": container 
with ID starting with e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7 not found: ID does not exist" containerID="e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.775757 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7"} err="failed to get container status \"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7\": rpc error: code = NotFound desc = could not find container \"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7\": container with ID starting with e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7 not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.775775 5109 scope.go:117] "RemoveContainer" containerID="73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.776153 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d"} err="failed to get container status \"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d\": rpc error: code = NotFound desc = could not find container \"73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d\": container with ID starting with 73bb90adc7bd712e9e55138f2c11a46346bf67d5b4f8348502a4f7aebda7757d not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.776175 5109 scope.go:117] "RemoveContainer" containerID="e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.776338 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7"} err="failed to get container 
status \"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7\": rpc error: code = NotFound desc = could not find container \"e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7\": container with ID starting with e07b49005e79c8def88a4712f8c0ac07324e69942041072277fc6aedc1e5b2e7 not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.776359 5109 scope.go:117] "RemoveContainer" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.803363 5109 scope.go:117] "RemoveContainer" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.808260 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.817614 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-9cp94"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.821325 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfm9"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.824963 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bgfm9"] Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.832132 5109 scope.go:117] "RemoveContainer" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.852063 5109 scope.go:117] "RemoveContainer" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.874802 5109 scope.go:117] "RemoveContainer" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 
00:18:58.920146 5109 scope.go:117] "RemoveContainer" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.936936 5109 scope.go:117] "RemoveContainer" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.952258 5109 scope.go:117] "RemoveContainer" containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.985336 5109 scope.go:117] "RemoveContainer" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.998264 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2955042f-e905-4bd8-893a-97e7c9723fca" path="/var/lib/kubelet/pods/2955042f-e905-4bd8-893a-97e7c9723fca/volumes" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.998609 5109 scope.go:117] "RemoveContainer" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.998897 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": container with ID starting with 600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6 not found: ID does not exist" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.998926 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} err="failed to get container status \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": rpc error: code = NotFound desc = could not find container 
\"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": container with ID starting with 600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6 not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.998947 5109 scope.go:117] "RemoveContainer" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.999247 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": container with ID starting with 4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69 not found: ID does not exist" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.999304 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a1c588b-414d-4d41-94a6-b74745ffd8c9" path="/var/lib/kubelet/pods/5a1c588b-414d-4d41-94a6-b74745ffd8c9/volumes" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.999296 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} err="failed to get container status \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": rpc error: code = NotFound desc = could not find container \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": container with ID starting with 4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69 not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.999403 5109 scope.go:117] "RemoveContainer" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.999696 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": container with ID starting with 0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b not found: ID does not exist" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.999717 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} err="failed to get container status \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": rpc error: code = NotFound desc = could not find container \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": container with ID starting with 0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b not found: ID does not exist" Feb 19 00:18:58 crc kubenswrapper[5109]: I0219 00:18:58.999730 5109 scope.go:117] "RemoveContainer" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:58 crc kubenswrapper[5109]: E0219 00:18:58.999945 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": container with ID starting with 9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e not found: ID does not exist" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:58.999983 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} err="failed to get container status \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": rpc error: code = NotFound desc = could not find container 
\"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": container with ID starting with 9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000006 5109 scope.go:117] "RemoveContainer" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:59 crc kubenswrapper[5109]: E0219 00:18:59.000212 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": container with ID starting with 2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af not found: ID does not exist" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000236 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} err="failed to get container status \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": rpc error: code = NotFound desc = could not find container \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": container with ID starting with 2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000249 5109 scope.go:117] "RemoveContainer" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:59 crc kubenswrapper[5109]: E0219 00:18:59.000471 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": container with ID starting with 6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80 not found: ID does not exist" 
containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000493 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} err="failed to get container status \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": rpc error: code = NotFound desc = could not find container \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": container with ID starting with 6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000506 5109 scope.go:117] "RemoveContainer" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:59 crc kubenswrapper[5109]: E0219 00:18:59.000876 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": container with ID starting with cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264 not found: ID does not exist" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000905 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} err="failed to get container status \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": rpc error: code = NotFound desc = could not find container \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": container with ID starting with cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.000917 5109 scope.go:117] 
"RemoveContainer" containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:59 crc kubenswrapper[5109]: E0219 00:18:59.001133 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": container with ID starting with c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599 not found: ID does not exist" containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001157 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} err="failed to get container status \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": rpc error: code = NotFound desc = could not find container \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": container with ID starting with c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001172 5109 scope.go:117] "RemoveContainer" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:59 crc kubenswrapper[5109]: E0219 00:18:59.001547 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": container with ID starting with 27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8 not found: ID does not exist" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001570 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} err="failed to get container status \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": rpc error: code = NotFound desc = could not find container \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": container with ID starting with 27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001584 5109 scope.go:117] "RemoveContainer" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001789 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} err="failed to get container status \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": rpc error: code = NotFound desc = could not find container \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": container with ID starting with 600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001805 5109 scope.go:117] "RemoveContainer" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.001993 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} err="failed to get container status \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": rpc error: code = NotFound desc = could not find container \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": container with ID starting with 4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69 not found: ID does not 
exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002013 5109 scope.go:117] "RemoveContainer" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002199 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} err="failed to get container status \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": rpc error: code = NotFound desc = could not find container \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": container with ID starting with 0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002220 5109 scope.go:117] "RemoveContainer" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002387 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} err="failed to get container status \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": rpc error: code = NotFound desc = could not find container \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": container with ID starting with 9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002404 5109 scope.go:117] "RemoveContainer" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002587 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} err="failed to get container status 
\"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": rpc error: code = NotFound desc = could not find container \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": container with ID starting with 2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002606 5109 scope.go:117] "RemoveContainer" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002948 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} err="failed to get container status \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": rpc error: code = NotFound desc = could not find container \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": container with ID starting with 6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.002974 5109 scope.go:117] "RemoveContainer" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003186 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} err="failed to get container status \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": rpc error: code = NotFound desc = could not find container \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": container with ID starting with cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003220 5109 scope.go:117] "RemoveContainer" 
containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003414 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} err="failed to get container status \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": rpc error: code = NotFound desc = could not find container \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": container with ID starting with c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003435 5109 scope.go:117] "RemoveContainer" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003618 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} err="failed to get container status \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": rpc error: code = NotFound desc = could not find container \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": container with ID starting with 27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003656 5109 scope.go:117] "RemoveContainer" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003854 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} err="failed to get container status \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": rpc error: code = NotFound desc = could 
not find container \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": container with ID starting with 600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.003882 5109 scope.go:117] "RemoveContainer" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004097 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} err="failed to get container status \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": rpc error: code = NotFound desc = could not find container \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": container with ID starting with 4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004115 5109 scope.go:117] "RemoveContainer" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004309 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} err="failed to get container status \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": rpc error: code = NotFound desc = could not find container \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": container with ID starting with 0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004328 5109 scope.go:117] "RemoveContainer" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 
00:18:59.004509 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} err="failed to get container status \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": rpc error: code = NotFound desc = could not find container \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": container with ID starting with 9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004526 5109 scope.go:117] "RemoveContainer" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004740 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} err="failed to get container status \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": rpc error: code = NotFound desc = could not find container \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": container with ID starting with 2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004763 5109 scope.go:117] "RemoveContainer" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004954 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} err="failed to get container status \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": rpc error: code = NotFound desc = could not find container \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": container with ID starting with 
6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.004973 5109 scope.go:117] "RemoveContainer" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005170 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} err="failed to get container status \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": rpc error: code = NotFound desc = could not find container \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": container with ID starting with cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005190 5109 scope.go:117] "RemoveContainer" containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005383 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} err="failed to get container status \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": rpc error: code = NotFound desc = could not find container \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": container with ID starting with c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005407 5109 scope.go:117] "RemoveContainer" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005592 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} err="failed to get container status \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": rpc error: code = NotFound desc = could not find container \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": container with ID starting with 27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005611 5109 scope.go:117] "RemoveContainer" containerID="600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005822 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6"} err="failed to get container status \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": rpc error: code = NotFound desc = could not find container \"600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6\": container with ID starting with 600d8d4216334e94c9d791c1628d2863b986266dcd0066c677ebb605dde43bf6 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.005846 5109 scope.go:117] "RemoveContainer" containerID="4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006054 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69"} err="failed to get container status \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": rpc error: code = NotFound desc = could not find container \"4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69\": container with ID starting with 4596a6b73031a4bce4246631cc52591471f20591fad7aace57884f29e1ae3e69 not found: ID does not 
exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006072 5109 scope.go:117] "RemoveContainer" containerID="0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006235 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b"} err="failed to get container status \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": rpc error: code = NotFound desc = could not find container \"0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b\": container with ID starting with 0f850eb43b6fe1afa8ba0233193457dec63fd7ce705d398e530bf17a2e6e1c6b not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006252 5109 scope.go:117] "RemoveContainer" containerID="9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006429 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e"} err="failed to get container status \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": rpc error: code = NotFound desc = could not find container \"9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e\": container with ID starting with 9afdead00fe9c6c4ab9de08387974ba50815e538d5318efa56df1eb5b628d91e not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006445 5109 scope.go:117] "RemoveContainer" containerID="2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006667 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af"} err="failed to get container status 
\"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": rpc error: code = NotFound desc = could not find container \"2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af\": container with ID starting with 2c79d21ef8f5e794c8363af8adac3bba43b1cf2074799834b97d95696c2bb3af not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006687 5109 scope.go:117] "RemoveContainer" containerID="6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006870 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80"} err="failed to get container status \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": rpc error: code = NotFound desc = could not find container \"6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80\": container with ID starting with 6c73ae7c96c109c8b1f2ada88080359d9ae5873915bb815f4dcb66c8323d2c80 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.006890 5109 scope.go:117] "RemoveContainer" containerID="cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.007158 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264"} err="failed to get container status \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": rpc error: code = NotFound desc = could not find container \"cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264\": container with ID starting with cad948fa2a79d9cd34ff605510839352b46721b360e9bbbf3949a41060b77264 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.007177 5109 scope.go:117] "RemoveContainer" 
containerID="c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.007380 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599"} err="failed to get container status \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": rpc error: code = NotFound desc = could not find container \"c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599\": container with ID starting with c6bb50e1e926202b514a03d0deb643437d45a912bc4e81bac7021d95530ad599 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.007396 5109 scope.go:117] "RemoveContainer" containerID="27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.007568 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8"} err="failed to get container status \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": rpc error: code = NotFound desc = could not find container \"27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8\": container with ID starting with 27f7f75b9e5d8fe5f9a78bf94e2aab33b56bcd5e0945323f21b2ef4cdb609cd8 not found: ID does not exist" Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.754695 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" event={"ID":"b622e341-4558-4516-9156-d7c83f36eee1","Type":"ContainerStarted","Data":"f816a998141638ca18b61545bf3019a892812cbcfabd6b6822a96b4ad7b63567"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.758104 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 
19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.758296 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-ctz69" event={"ID":"9d3c36ec-d151-4cb3-8bcb-931c2665a1e7","Type":"ContainerStarted","Data":"cf3cdf8f9eeee105b95630f6f9ccbbe2acf3929eca2fc648f14e8021cdcccafa"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765344 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"017868a4d91984c261b8c4805757cb4dfb3c72fb348e98d493d3555463564320"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765395 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"eb0af8697544edf1b44fdc12222c76ffa1195a1960ffd6295b0d9b2177c74efd"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765413 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"ad144199acd44d5d86e0dab6d4f775a2f984d2ad38874e2274a53812b4328932"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765429 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"681d471125152080290a9acffbc1bc3ddd38c3b778f757da6ded866c1f0c5f30"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765443 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"a4ab404d2ee6b0312f93ed068a1a3bbc92ee959f661b8ee3e331009aa452b3ce"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.765457 5109 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"620132ddaf7fe4894d2e9da2076f5f9df8f73e5182fe4394ea2b66168f915f2a"} Feb 19 00:18:59 crc kubenswrapper[5109]: I0219 00:18:59.786801 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-2v2zf" podStartSLOduration=2.786768283 podStartE2EDuration="2.786768283s" podCreationTimestamp="2026-02-19 00:18:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:18:59.783368348 +0000 UTC m=+569.619608347" watchObservedRunningTime="2026-02-19 00:18:59.786768283 +0000 UTC m=+569.623008322" Feb 19 00:19:01 crc kubenswrapper[5109]: I0219 00:19:01.783229 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"5ca786721b709890b2b32250305eafe326aab62fb080519d269b805e02dce0b8"} Feb 19 00:19:04 crc kubenswrapper[5109]: I0219 00:19:04.807503 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" event={"ID":"6aff3eda-caf3-4a12-8265-19f4c5f79717","Type":"ContainerStarted","Data":"80886ae0d7d147fc7e9f14668fe108a0f178780c9e5975e5802d713408a889a4"} Feb 19 00:19:04 crc kubenswrapper[5109]: I0219 00:19:04.808695 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:04 crc kubenswrapper[5109]: I0219 00:19:04.844623 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" podStartSLOduration=6.8446073720000005 podStartE2EDuration="6.844607372s" podCreationTimestamp="2026-02-19 00:18:58 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:19:04.839733875 +0000 UTC m=+574.675973894" watchObservedRunningTime="2026-02-19 00:19:04.844607372 +0000 UTC m=+574.680847351" Feb 19 00:19:04 crc kubenswrapper[5109]: I0219 00:19:04.847939 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:05 crc kubenswrapper[5109]: I0219 00:19:05.813894 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:05 crc kubenswrapper[5109]: I0219 00:19:05.813939 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:05 crc kubenswrapper[5109]: I0219 00:19:05.847007 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:18 crc kubenswrapper[5109]: I0219 00:19:18.289614 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:19:18 crc kubenswrapper[5109]: I0219 00:19:18.290437 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:19:31 crc kubenswrapper[5109]: I0219 00:19:31.259047 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 19 00:19:31 crc kubenswrapper[5109]: I0219 
00:19:31.265568 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 19 00:19:31 crc kubenswrapper[5109]: I0219 00:19:31.268039 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:19:31 crc kubenswrapper[5109]: I0219 00:19:31.273767 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:19:37 crc kubenswrapper[5109]: I0219 00:19:37.863562 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w55wb" Feb 19 00:19:42 crc kubenswrapper[5109]: I0219 00:19:42.504179 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"] Feb 19 00:19:42 crc kubenswrapper[5109]: I0219 00:19:42.506594 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hc754" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="registry-server" containerID="cri-o://2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe" gracePeriod=30 Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.001709 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.053437 5109 generic.go:358] "Generic (PLEG): container finished" podID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerID="2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe" exitCode=0 Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.053487 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerDied","Data":"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe"} Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.053534 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hc754" event={"ID":"0fafd59c-5273-4f91-8772-cc3a3dd845fa","Type":"ContainerDied","Data":"4381ebaafe5eb3911ecf5259b93f1a829a29bc455aa70967dc13d0596834735b"} Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.053542 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hc754" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.053555 5109 scope.go:117] "RemoveContainer" containerID="2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.069338 5109 scope.go:117] "RemoveContainer" containerID="0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.085021 5109 scope.go:117] "RemoveContainer" containerID="a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.102361 5109 scope.go:117] "RemoveContainer" containerID="2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe" Feb 19 00:19:43 crc kubenswrapper[5109]: E0219 00:19:43.102848 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe\": container with ID starting with 2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe not found: ID does not exist" containerID="2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.102912 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe"} err="failed to get container status \"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe\": rpc error: code = NotFound desc = could not find container \"2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe\": container with ID starting with 2968914029afdc59a16467e26c7012f5603a8de8f48ca4608b9612289f6b3cfe not found: ID does not exist" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.102950 5109 scope.go:117] "RemoveContainer" 
containerID="0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d" Feb 19 00:19:43 crc kubenswrapper[5109]: E0219 00:19:43.103420 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d\": container with ID starting with 0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d not found: ID does not exist" containerID="0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.103470 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d"} err="failed to get container status \"0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d\": rpc error: code = NotFound desc = could not find container \"0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d\": container with ID starting with 0e65af9a9c83947beada70adcfa77974acc5be9b0d6c901ac3a8c6d2b5cb8c1d not found: ID does not exist" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.103508 5109 scope.go:117] "RemoveContainer" containerID="a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2" Feb 19 00:19:43 crc kubenswrapper[5109]: E0219 00:19:43.103928 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2\": container with ID starting with a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2 not found: ID does not exist" containerID="a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.103971 5109 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2"} err="failed to get container status \"a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2\": rpc error: code = NotFound desc = could not find container \"a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2\": container with ID starting with a3f0f0b2f4dd35ceb3d4278cff3b70b7cf855304a826a796aa914b9ad04f85f2 not found: ID does not exist" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.132491 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content\") pod \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.132617 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st66c\" (UniqueName: \"kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c\") pod \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.132912 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities\") pod \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\" (UID: \"0fafd59c-5273-4f91-8772-cc3a3dd845fa\") " Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.134513 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities" (OuterVolumeSpecName: "utilities") pod "0fafd59c-5273-4f91-8772-cc3a3dd845fa" (UID: "0fafd59c-5273-4f91-8772-cc3a3dd845fa"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.135341 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.140955 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c" (OuterVolumeSpecName: "kube-api-access-st66c") pod "0fafd59c-5273-4f91-8772-cc3a3dd845fa" (UID: "0fafd59c-5273-4f91-8772-cc3a3dd845fa"). InnerVolumeSpecName "kube-api-access-st66c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.147942 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0fafd59c-5273-4f91-8772-cc3a3dd845fa" (UID: "0fafd59c-5273-4f91-8772-cc3a3dd845fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.237225 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-st66c\" (UniqueName: \"kubernetes.io/projected/0fafd59c-5273-4f91-8772-cc3a3dd845fa-kube-api-access-st66c\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.237270 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0fafd59c-5273-4f91-8772-cc3a3dd845fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.392160 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"] Feb 19 00:19:43 crc kubenswrapper[5109]: I0219 00:19:43.396587 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hc754"] Feb 19 00:19:44 crc kubenswrapper[5109]: I0219 00:19:44.996983 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" path="/var/lib/kubelet/pods/0fafd59c-5273-4f91-8772-cc3a3dd845fa/volumes" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.900025 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6"] Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901471 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="extract-utilities" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901508 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="extract-utilities" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901530 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="extract-content" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901542 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="extract-content" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901561 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="registry-server" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901573 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="registry-server" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.901802 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="0fafd59c-5273-4f91-8772-cc3a3dd845fa" containerName="registry-server" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.914998 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6"] Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.915159 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.917718 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.974170 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.974352 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99x9l\" (UniqueName: \"kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:45 crc kubenswrapper[5109]: I0219 00:19:45.974671 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.076107 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.076227 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.076286 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-99x9l\" (UniqueName: \"kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.077024 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.077257 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: 
\"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.118297 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-99x9l\" (UniqueName: \"kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.232858 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:46 crc kubenswrapper[5109]: I0219 00:19:46.651145 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6"] Feb 19 00:19:47 crc kubenswrapper[5109]: I0219 00:19:47.084537 5109 generic.go:358] "Generic (PLEG): container finished" podID="74cea250-8141-48fe-91eb-54068d760685" containerID="ef34504fc5111ae0a62f14a2e0adc3c59c2c9243f088b5a5149e7900f160da68" exitCode=0 Feb 19 00:19:47 crc kubenswrapper[5109]: I0219 00:19:47.084590 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" event={"ID":"74cea250-8141-48fe-91eb-54068d760685","Type":"ContainerDied","Data":"ef34504fc5111ae0a62f14a2e0adc3c59c2c9243f088b5a5149e7900f160da68"} Feb 19 00:19:47 crc kubenswrapper[5109]: I0219 00:19:47.084665 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" 
event={"ID":"74cea250-8141-48fe-91eb-54068d760685","Type":"ContainerStarted","Data":"c4058012ad07031f32d3bc040b93f54d973b4e68cea7fe182bfea667a3da7017"} Feb 19 00:19:48 crc kubenswrapper[5109]: I0219 00:19:48.290310 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:19:48 crc kubenswrapper[5109]: I0219 00:19:48.290705 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:19:49 crc kubenswrapper[5109]: I0219 00:19:49.098313 5109 generic.go:358] "Generic (PLEG): container finished" podID="74cea250-8141-48fe-91eb-54068d760685" containerID="f95a41f2529b0f688b853d507411bb4bdf5ff26648c106a62f161f5c743bc379" exitCode=0 Feb 19 00:19:49 crc kubenswrapper[5109]: I0219 00:19:49.098662 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" event={"ID":"74cea250-8141-48fe-91eb-54068d760685","Type":"ContainerDied","Data":"f95a41f2529b0f688b853d507411bb4bdf5ff26648c106a62f161f5c743bc379"} Feb 19 00:19:50 crc kubenswrapper[5109]: I0219 00:19:50.111104 5109 generic.go:358] "Generic (PLEG): container finished" podID="74cea250-8141-48fe-91eb-54068d760685" containerID="3d15a614bfd150cbacf78c1d66ae07b9a3905146a9e0c9fce2e9084194737924" exitCode=0 Feb 19 00:19:50 crc kubenswrapper[5109]: I0219 00:19:50.111223 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" 
event={"ID":"74cea250-8141-48fe-91eb-54068d760685","Type":"ContainerDied","Data":"3d15a614bfd150cbacf78c1d66ae07b9a3905146a9e0c9fce2e9084194737924"} Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.473333 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.644254 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle\") pod \"74cea250-8141-48fe-91eb-54068d760685\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.644764 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99x9l\" (UniqueName: \"kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l\") pod \"74cea250-8141-48fe-91eb-54068d760685\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.644837 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util\") pod \"74cea250-8141-48fe-91eb-54068d760685\" (UID: \"74cea250-8141-48fe-91eb-54068d760685\") " Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.649149 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle" (OuterVolumeSpecName: "bundle") pod "74cea250-8141-48fe-91eb-54068d760685" (UID: "74cea250-8141-48fe-91eb-54068d760685"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.653724 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l" (OuterVolumeSpecName: "kube-api-access-99x9l") pod "74cea250-8141-48fe-91eb-54068d760685" (UID: "74cea250-8141-48fe-91eb-54068d760685"). InnerVolumeSpecName "kube-api-access-99x9l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.664370 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util" (OuterVolumeSpecName: "util") pod "74cea250-8141-48fe-91eb-54068d760685" (UID: "74cea250-8141-48fe-91eb-54068d760685"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.747165 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.747222 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99x9l\" (UniqueName: \"kubernetes.io/projected/74cea250-8141-48fe-91eb-54068d760685-kube-api-access-99x9l\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:51 crc kubenswrapper[5109]: I0219 00:19:51.747246 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74cea250-8141-48fe-91eb-54068d760685-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:52 crc kubenswrapper[5109]: I0219 00:19:52.128702 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" 
event={"ID":"74cea250-8141-48fe-91eb-54068d760685","Type":"ContainerDied","Data":"c4058012ad07031f32d3bc040b93f54d973b4e68cea7fe182bfea667a3da7017"} Feb 19 00:19:52 crc kubenswrapper[5109]: I0219 00:19:52.128736 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6" Feb 19 00:19:52 crc kubenswrapper[5109]: I0219 00:19:52.128761 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4058012ad07031f32d3bc040b93f54d973b4e68cea7fe182bfea667a3da7017" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.509587 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth"] Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510596 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="util" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510623 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="util" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510698 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="pull" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510715 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="pull" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510748 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="extract" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510767 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="extract" Feb 19 
00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.510986 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="74cea250-8141-48fe-91eb-54068d760685" containerName="extract" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.522205 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.524245 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth"] Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.526394 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.675187 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.675823 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.676114 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h479r\" (UniqueName: 
\"kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.778009 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.778124 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-h479r\" (UniqueName: \"kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.778305 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.779250 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: 
\"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.779341 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.812734 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-h479r\" (UniqueName: \"kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:53 crc kubenswrapper[5109]: I0219 00:19:53.845836 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:54 crc kubenswrapper[5109]: I0219 00:19:54.079234 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth"] Feb 19 00:19:54 crc kubenswrapper[5109]: W0219 00:19:54.085810 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47290e31_3e82_43f9_8568_c2a1d602f78c.slice/crio-1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46 WatchSource:0}: Error finding container 1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46: Status 404 returned error can't find the container with id 1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46 Feb 19 00:19:54 crc kubenswrapper[5109]: I0219 00:19:54.142995 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" event={"ID":"47290e31-3e82-43f9-8568-c2a1d602f78c","Type":"ContainerStarted","Data":"1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46"} Feb 19 00:19:55 crc kubenswrapper[5109]: I0219 00:19:55.153990 5109 generic.go:358] "Generic (PLEG): container finished" podID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerID="897b2fc14a7f6a8fce3466c9180839180d6629d20be440ddb29250daf9b1fa69" exitCode=0 Feb 19 00:19:55 crc kubenswrapper[5109]: I0219 00:19:55.154159 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" event={"ID":"47290e31-3e82-43f9-8568-c2a1d602f78c","Type":"ContainerDied","Data":"897b2fc14a7f6a8fce3466c9180839180d6629d20be440ddb29250daf9b1fa69"} Feb 19 00:19:56 crc kubenswrapper[5109]: I0219 00:19:56.171013 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerID="6398723dd33f7966c439622516ac4c1d81f1a5aa419743122ba6085a33946434" exitCode=0 Feb 19 00:19:56 crc kubenswrapper[5109]: I0219 00:19:56.171147 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" event={"ID":"47290e31-3e82-43f9-8568-c2a1d602f78c","Type":"ContainerDied","Data":"6398723dd33f7966c439622516ac4c1d81f1a5aa419743122ba6085a33946434"} Feb 19 00:19:57 crc kubenswrapper[5109]: I0219 00:19:57.178970 5109 generic.go:358] "Generic (PLEG): container finished" podID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerID="7638cad91a6b3be656bcd939eaf03a5b487ee9aea7b02e60988f87ba2735c40a" exitCode=0 Feb 19 00:19:57 crc kubenswrapper[5109]: I0219 00:19:57.179058 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" event={"ID":"47290e31-3e82-43f9-8568-c2a1d602f78c","Type":"ContainerDied","Data":"7638cad91a6b3be656bcd939eaf03a5b487ee9aea7b02e60988f87ba2735c40a"} Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.541796 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.645801 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle\") pod \"47290e31-3e82-43f9-8568-c2a1d602f78c\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.645924 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util\") pod \"47290e31-3e82-43f9-8568-c2a1d602f78c\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.645956 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h479r\" (UniqueName: \"kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r\") pod \"47290e31-3e82-43f9-8568-c2a1d602f78c\" (UID: \"47290e31-3e82-43f9-8568-c2a1d602f78c\") " Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.646819 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle" (OuterVolumeSpecName: "bundle") pod "47290e31-3e82-43f9-8568-c2a1d602f78c" (UID: "47290e31-3e82-43f9-8568-c2a1d602f78c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.657589 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r" (OuterVolumeSpecName: "kube-api-access-h479r") pod "47290e31-3e82-43f9-8568-c2a1d602f78c" (UID: "47290e31-3e82-43f9-8568-c2a1d602f78c"). InnerVolumeSpecName "kube-api-access-h479r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.665792 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util" (OuterVolumeSpecName: "util") pod "47290e31-3e82-43f9-8568-c2a1d602f78c" (UID: "47290e31-3e82-43f9-8568-c2a1d602f78c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.747401 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.747443 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h479r\" (UniqueName: \"kubernetes.io/projected/47290e31-3e82-43f9-8568-c2a1d602f78c-kube-api-access-h479r\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:58 crc kubenswrapper[5109]: I0219 00:19:58.747455 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/47290e31-3e82-43f9-8568-c2a1d602f78c-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.090708 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv"] Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091186 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="util" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091198 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="util" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091220 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="pull" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091226 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="pull" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091236 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="extract" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091242 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="extract" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.091325 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="47290e31-3e82-43f9-8568-c2a1d602f78c" containerName="extract" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.099732 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.106446 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv"] Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.152776 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.152831 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.152905 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45kzk\" (UniqueName: \"kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.190011 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" event={"ID":"47290e31-3e82-43f9-8568-c2a1d602f78c","Type":"ContainerDied","Data":"1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46"} Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.190043 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e02699d3182cd62fb8d33f261b96b75e96996a4b6f2fb3102ef44cbd1910b46" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.190050 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.254181 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.254379 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.254518 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-45kzk\" (UniqueName: \"kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.254811 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 
00:19:59.254841 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.270623 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-45kzk\" (UniqueName: \"kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.411816 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.627461 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv"] Feb 19 00:19:59 crc kubenswrapper[5109]: W0219 00:19:59.629463 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4d279e6_ab61_4657_a567_b007a7d707f9.slice/crio-71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec WatchSource:0}: Error finding container 71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec: Status 404 returned error can't find the container with id 71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec Feb 19 00:19:59 crc kubenswrapper[5109]: I0219 00:19:59.631780 5109 provider.go:93] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.122819 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524340-dwzkg"] Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.196423 5109 generic.go:358] "Generic (PLEG): container finished" podID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerID="6fc58ee8d88dcdd34d8b9a3860e4a63fab9c7958cf1f5d531ba0dae30f67dffe" exitCode=0 Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.200925 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-dwzkg"] Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.200956 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" event={"ID":"b4d279e6-ab61-4657-a567-b007a7d707f9","Type":"ContainerDied","Data":"6fc58ee8d88dcdd34d8b9a3860e4a63fab9c7958cf1f5d531ba0dae30f67dffe"} Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.201007 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" event={"ID":"b4d279e6-ab61-4657-a567-b007a7d707f9","Type":"ContainerStarted","Data":"71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec"} Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.201093 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.206524 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.206709 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\"" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.206805 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.402012 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8djxh\" (UniqueName: \"kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh\") pod \"auto-csr-approver-29524340-dwzkg\" (UID: \"45cbaf31-5202-4d06-8328-9699984a859b\") " pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.502597 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8djxh\" (UniqueName: \"kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh\") pod \"auto-csr-approver-29524340-dwzkg\" (UID: \"45cbaf31-5202-4d06-8328-9699984a859b\") " pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.534031 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8djxh\" (UniqueName: \"kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh\") pod \"auto-csr-approver-29524340-dwzkg\" (UID: \"45cbaf31-5202-4d06-8328-9699984a859b\") " pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:00 crc kubenswrapper[5109]: I0219 00:20:00.818769 5109 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:01 crc kubenswrapper[5109]: I0219 00:20:01.217751 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-dwzkg"] Feb 19 00:20:02 crc kubenswrapper[5109]: I0219 00:20:02.216911 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" event={"ID":"45cbaf31-5202-4d06-8328-9699984a859b","Type":"ContainerStarted","Data":"6569d4ee9e56de57d3acce85b87b1447ec23a2eb29a7d664fb14eb36764e60ea"} Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.750135 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.755204 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.762155 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.763085 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-2hs8s\"" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.763262 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.769110 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.840804 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58gkq\" (UniqueName: 
\"kubernetes.io/projected/a91dafae-307e-4ee3-965f-1534328cf242-kube-api-access-58gkq\") pod \"obo-prometheus-operator-9bc85b4bf-7dwk9\" (UID: \"a91dafae-307e-4ee3-965f-1534328cf242\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.875761 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.879279 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.881503 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-hkmnj\"" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.881867 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.885072 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.889195 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.892597 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.916927 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8"] Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.942461 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.942526 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.942554 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.942591 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58gkq\" (UniqueName: 
\"kubernetes.io/projected/a91dafae-307e-4ee3-965f-1534328cf242-kube-api-access-58gkq\") pod \"obo-prometheus-operator-9bc85b4bf-7dwk9\" (UID: \"a91dafae-307e-4ee3-965f-1534328cf242\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.942696 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:03 crc kubenswrapper[5109]: I0219 00:20:03.963439 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58gkq\" (UniqueName: \"kubernetes.io/projected/a91dafae-307e-4ee3-965f-1534328cf242-kube-api-access-58gkq\") pod \"obo-prometheus-operator-9bc85b4bf-7dwk9\" (UID: \"a91dafae-307e-4ee3-965f-1534328cf242\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.044050 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.044151 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.044191 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.044294 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.048027 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.048205 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.048971 5109 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/30150c45-319a-48be-a756-530e75c42b2d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8\" (UID: \"30150c45-319a-48be-a756-530e75c42b2d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.059026 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b479eb3f-2359-4159-ad91-4f958b238af7-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c\" (UID: \"b479eb3f-2359-4159-ad91-4f958b238af7\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.078240 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mgfrq"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.086961 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.089648 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-qgxw6\"" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.090309 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.094540 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mgfrq"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.129143 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.145549 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnz9\" (UniqueName: \"kubernetes.io/projected/a659594c-39ca-4fe7-b61b-bb074e4abc6d-kube-api-access-thnz9\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: \"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.145684 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a659594c-39ca-4fe7-b61b-bb074e4abc6d-observability-operator-tls\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: \"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.191879 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.194483 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kqlcr"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.202555 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.205899 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-gsnk6\"" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.210353 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.223879 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kqlcr"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.236126 5109 generic.go:358] "Generic (PLEG): container finished" podID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerID="0b140aa77e0d59ff3cc0a48a485c4e9144e227775048e7ad7594249cd17cdde2" exitCode=0 Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.236451 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" event={"ID":"b4d279e6-ab61-4657-a567-b007a7d707f9","Type":"ContainerDied","Data":"0b140aa77e0d59ff3cc0a48a485c4e9144e227775048e7ad7594249cd17cdde2"} Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.238234 5109 generic.go:358] "Generic (PLEG): container finished" podID="45cbaf31-5202-4d06-8328-9699984a859b" containerID="32272c92ef5aa25088e59b1a36902d221b6586475e995d09e76ff6b37c455b74" exitCode=0 Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.238296 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" event={"ID":"45cbaf31-5202-4d06-8328-9699984a859b","Type":"ContainerDied","Data":"32272c92ef5aa25088e59b1a36902d221b6586475e995d09e76ff6b37c455b74"} Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.246781 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a659594c-39ca-4fe7-b61b-bb074e4abc6d-observability-operator-tls\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: \"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.246842 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5bd03c0-434c-4adf-af86-1b5245b0a01e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.246859 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thnz9\" (UniqueName: \"kubernetes.io/projected/a659594c-39ca-4fe7-b61b-bb074e4abc6d-kube-api-access-thnz9\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: \"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.246888 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wzrf\" (UniqueName: \"kubernetes.io/projected/b5bd03c0-434c-4adf-af86-1b5245b0a01e-kube-api-access-8wzrf\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.255415 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a659594c-39ca-4fe7-b61b-bb074e4abc6d-observability-operator-tls\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: \"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.270042 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thnz9\" (UniqueName: \"kubernetes.io/projected/a659594c-39ca-4fe7-b61b-bb074e4abc6d-kube-api-access-thnz9\") pod \"observability-operator-85c68dddb-mgfrq\" (UID: 
\"a659594c-39ca-4fe7-b61b-bb074e4abc6d\") " pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.348694 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5bd03c0-434c-4adf-af86-1b5245b0a01e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.348802 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wzrf\" (UniqueName: \"kubernetes.io/projected/b5bd03c0-434c-4adf-af86-1b5245b0a01e-kube-api-access-8wzrf\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.350233 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/b5bd03c0-434c-4adf-af86-1b5245b0a01e-openshift-service-ca\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.371913 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wzrf\" (UniqueName: \"kubernetes.io/projected/b5bd03c0-434c-4adf-af86-1b5245b0a01e-kube-api-access-8wzrf\") pod \"perses-operator-669c9f96b5-kqlcr\" (UID: \"b5bd03c0-434c-4adf-af86-1b5245b0a01e\") " pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.389495 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9"] Feb 19 00:20:04 crc kubenswrapper[5109]: 
W0219 00:20:04.402196 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda91dafae_307e_4ee3_965f_1534328cf242.slice/crio-dbd720849c27d09bf84d7d07a0132d1395b302d943bab5b55da9b4097aff6118 WatchSource:0}: Error finding container dbd720849c27d09bf84d7d07a0132d1395b302d943bab5b55da9b4097aff6118: Status 404 returned error can't find the container with id dbd720849c27d09bf84d7d07a0132d1395b302d943bab5b55da9b4097aff6118 Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.409499 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.484295 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.525846 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.611726 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8"] Feb 19 00:20:04 crc kubenswrapper[5109]: W0219 00:20:04.644029 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30150c45_319a_48be_a756_530e75c42b2d.slice/crio-e27874dbcc960745a50dc98ad4dd006f3f7dbdf257105c40ab4a7bf4f4b6c9eb WatchSource:0}: Error finding container e27874dbcc960745a50dc98ad4dd006f3f7dbdf257105c40ab4a7bf4f4b6c9eb: Status 404 returned error can't find the container with id e27874dbcc960745a50dc98ad4dd006f3f7dbdf257105c40ab4a7bf4f4b6c9eb Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.777446 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-mgfrq"] Feb 19 00:20:04 crc kubenswrapper[5109]: I0219 00:20:04.819443 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-kqlcr"] Feb 19 00:20:04 crc kubenswrapper[5109]: W0219 00:20:04.828740 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5bd03c0_434c_4adf_af86_1b5245b0a01e.slice/crio-15004a1d886c2e826c91a541e77ca57d9d03c484fdda1ba365bb254eab82c2ec WatchSource:0}: Error finding container 15004a1d886c2e826c91a541e77ca57d9d03c484fdda1ba365bb254eab82c2ec: Status 404 returned error can't find the container with id 15004a1d886c2e826c91a541e77ca57d9d03c484fdda1ba365bb254eab82c2ec Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.245027 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" 
event={"ID":"30150c45-319a-48be-a756-530e75c42b2d","Type":"ContainerStarted","Data":"e27874dbcc960745a50dc98ad4dd006f3f7dbdf257105c40ab4a7bf4f4b6c9eb"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.246245 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" event={"ID":"b479eb3f-2359-4159-ad91-4f958b238af7","Type":"ContainerStarted","Data":"2c675c59e36a56234e1ee196cf6de8314b563417de08e707399b554b1af71b65"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.247363 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" event={"ID":"a659594c-39ca-4fe7-b61b-bb074e4abc6d","Type":"ContainerStarted","Data":"400a0ce64ab4dc1d98c3bda4f481a57e8fa4354a62c79934a35eab4cb2b20d8d"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.248477 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" event={"ID":"a91dafae-307e-4ee3-965f-1534328cf242","Type":"ContainerStarted","Data":"dbd720849c27d09bf84d7d07a0132d1395b302d943bab5b55da9b4097aff6118"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.249333 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" event={"ID":"b5bd03c0-434c-4adf-af86-1b5245b0a01e","Type":"ContainerStarted","Data":"15004a1d886c2e826c91a541e77ca57d9d03c484fdda1ba365bb254eab82c2ec"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.251284 5109 generic.go:358] "Generic (PLEG): container finished" podID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerID="55e0bc5c5a54409a3c25a2214b61c5efd79c7c299f3f4a1c403b11ebe31a12e4" exitCode=0 Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.251493 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" 
event={"ID":"b4d279e6-ab61-4657-a567-b007a7d707f9","Type":"ContainerDied","Data":"55e0bc5c5a54409a3c25a2214b61c5efd79c7c299f3f4a1c403b11ebe31a12e4"} Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.606376 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.673297 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8djxh\" (UniqueName: \"kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh\") pod \"45cbaf31-5202-4d06-8328-9699984a859b\" (UID: \"45cbaf31-5202-4d06-8328-9699984a859b\") " Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.683919 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh" (OuterVolumeSpecName: "kube-api-access-8djxh") pod "45cbaf31-5202-4d06-8328-9699984a859b" (UID: "45cbaf31-5202-4d06-8328-9699984a859b"). InnerVolumeSpecName "kube-api-access-8djxh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:20:05 crc kubenswrapper[5109]: I0219 00:20:05.780269 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8djxh\" (UniqueName: \"kubernetes.io/projected/45cbaf31-5202-4d06-8328-9699984a859b-kube-api-access-8djxh\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.286035 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.286552 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524340-dwzkg" event={"ID":"45cbaf31-5202-4d06-8328-9699984a859b","Type":"ContainerDied","Data":"6569d4ee9e56de57d3acce85b87b1447ec23a2eb29a7d664fb14eb36764e60ea"} Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.286601 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6569d4ee9e56de57d3acce85b87b1447ec23a2eb29a7d664fb14eb36764e60ea" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.593666 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.660880 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524334-7q274"] Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.667229 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524334-7q274"] Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.706008 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle\") pod \"b4d279e6-ab61-4657-a567-b007a7d707f9\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.706089 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util\") pod \"b4d279e6-ab61-4657-a567-b007a7d707f9\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.706118 5109 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-45kzk\" (UniqueName: \"kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk\") pod \"b4d279e6-ab61-4657-a567-b007a7d707f9\" (UID: \"b4d279e6-ab61-4657-a567-b007a7d707f9\") " Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.718516 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle" (OuterVolumeSpecName: "bundle") pod "b4d279e6-ab61-4657-a567-b007a7d707f9" (UID: "b4d279e6-ab61-4657-a567-b007a7d707f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.719065 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk" (OuterVolumeSpecName: "kube-api-access-45kzk") pod "b4d279e6-ab61-4657-a567-b007a7d707f9" (UID: "b4d279e6-ab61-4657-a567-b007a7d707f9"). InnerVolumeSpecName "kube-api-access-45kzk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.719279 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util" (OuterVolumeSpecName: "util") pod "b4d279e6-ab61-4657-a567-b007a7d707f9" (UID: "b4d279e6-ab61-4657-a567-b007a7d707f9"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.807415 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-45kzk\" (UniqueName: \"kubernetes.io/projected/b4d279e6-ab61-4657-a567-b007a7d707f9-kube-api-access-45kzk\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.807445 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.807453 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b4d279e6-ab61-4657-a567-b007a7d707f9-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:20:06 crc kubenswrapper[5109]: I0219 00:20:06.998586 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d00d4e95-b25c-4c66-8a47-ebc62d3669f8" path="/var/lib/kubelet/pods/d00d4e95-b25c-4c66-8a47-ebc62d3669f8/volumes" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.307960 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" event={"ID":"b4d279e6-ab61-4657-a567-b007a7d707f9","Type":"ContainerDied","Data":"71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec"} Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.307999 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71272c5e84ac53e327ecf204a54ade8f3299e2cc4d6ad0217604e1a8aa44eaec" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.308103 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.812790 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-5754f7d948-xp5l2"] Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813559 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45cbaf31-5202-4d06-8328-9699984a859b" containerName="oc" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813582 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="45cbaf31-5202-4d06-8328-9699984a859b" containerName="oc" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813590 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="pull" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813598 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="pull" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813658 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="util" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813667 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="util" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813680 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="extract" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.813686 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="extract" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.814331 5109 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="45cbaf31-5202-4d06-8328-9699984a859b" containerName="oc" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.814357 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="b4d279e6-ab61-4657-a567-b007a7d707f9" containerName="extract" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.822887 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-5754f7d948-xp5l2"] Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.823046 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.826157 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.826266 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.826842 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-4vqh5\"" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.827455 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.923357 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7pj4\" (UniqueName: \"kubernetes.io/projected/07c9e826-73a9-4b30-9472-05ebb7791ec2-kube-api-access-r7pj4\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.923739 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-webhook-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:07 crc kubenswrapper[5109]: I0219 00:20:07.923810 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-apiservice-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.024843 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r7pj4\" (UniqueName: \"kubernetes.io/projected/07c9e826-73a9-4b30-9472-05ebb7791ec2-kube-api-access-r7pj4\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.024928 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-webhook-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.024960 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-apiservice-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc 
kubenswrapper[5109]: I0219 00:20:08.031337 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-apiservice-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.046790 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/07c9e826-73a9-4b30-9472-05ebb7791ec2-webhook-cert\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.055412 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7pj4\" (UniqueName: \"kubernetes.io/projected/07c9e826-73a9-4b30-9472-05ebb7791ec2-kube-api-access-r7pj4\") pod \"elastic-operator-5754f7d948-xp5l2\" (UID: \"07c9e826-73a9-4b30-9472-05ebb7791ec2\") " pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.142409 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" Feb 19 00:20:08 crc kubenswrapper[5109]: I0219 00:20:08.603015 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-5754f7d948-xp5l2"] Feb 19 00:20:09 crc kubenswrapper[5109]: I0219 00:20:09.357562 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" event={"ID":"07c9e826-73a9-4b30-9472-05ebb7791ec2","Type":"ContainerStarted","Data":"a0e89783739033173ce437655517b934a6e70fd6cbf9c1b0a5c861f7d3bed11f"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.290069 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.290431 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.290487 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.291210 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:20:18 crc 
kubenswrapper[5109]: I0219 00:20:18.291279 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba" gracePeriod=600 Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.416092 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" event={"ID":"30150c45-319a-48be-a756-530e75c42b2d","Type":"ContainerStarted","Data":"639631018b50d6383b1516b2c8665d539c1d58669179c3087b273795f690815e"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.417626 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" event={"ID":"b479eb3f-2359-4159-ad91-4f958b238af7","Type":"ContainerStarted","Data":"c5eb38071b1710ab7ebf19d2748ed62dc756166a20bda938b7c23832e5a284ae"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.419136 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" event={"ID":"a659594c-39ca-4fe7-b61b-bb074e4abc6d","Type":"ContainerStarted","Data":"122a475dc88e12c1341429dddd583584cf05d76e2a6cbceeebd35a0cd53f01c3"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.419350 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.422191 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" event={"ID":"a91dafae-307e-4ee3-965f-1534328cf242","Type":"ContainerStarted","Data":"0c7eb459f8a706f65aab2b6fd1216030d4e94ae70bbd1241197bf159d01140ae"} Feb 19 00:20:18 crc 
kubenswrapper[5109]: I0219 00:20:18.425263 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" event={"ID":"07c9e826-73a9-4b30-9472-05ebb7791ec2","Type":"ContainerStarted","Data":"ebb1ab2a4fe92a35c93823658c80674353b8b4d4428ac943281e894c0a3a0177"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.427888 5109 generic.go:358] "Generic (PLEG): container finished" podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba" exitCode=0 Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.427941 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.427994 5109 scope.go:117] "RemoveContainer" containerID="5f198598dbd9b3847907465d011f415221d0681c69bc68e80c6cb600070bce5b" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.429379 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" event={"ID":"b5bd03c0-434c-4adf-af86-1b5245b0a01e","Type":"ContainerStarted","Data":"8c791d5ca065db5df29f33951464a1d903b29944632836d07129bd0b7dc259b2"} Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.429492 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.452871 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" podStartSLOduration=2.001856874 podStartE2EDuration="14.452855794s" podCreationTimestamp="2026-02-19 00:20:04 +0000 UTC" firstStartedPulling="2026-02-19 00:20:04.780151894 +0000 UTC 
m=+634.616391873" lastFinishedPulling="2026-02-19 00:20:17.231150804 +0000 UTC m=+647.067390793" observedRunningTime="2026-02-19 00:20:18.452429012 +0000 UTC m=+648.288669021" watchObservedRunningTime="2026-02-19 00:20:18.452855794 +0000 UTC m=+648.289095783" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.455120 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8" podStartSLOduration=2.932903123 podStartE2EDuration="15.455111729s" podCreationTimestamp="2026-02-19 00:20:03 +0000 UTC" firstStartedPulling="2026-02-19 00:20:04.647200194 +0000 UTC m=+634.483440183" lastFinishedPulling="2026-02-19 00:20:17.1694088 +0000 UTC m=+647.005648789" observedRunningTime="2026-02-19 00:20:18.435244998 +0000 UTC m=+648.271484997" watchObservedRunningTime="2026-02-19 00:20:18.455111729 +0000 UTC m=+648.291351718" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.477222 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr" podStartSLOduration=2.136276466 podStartE2EDuration="14.477202864s" podCreationTimestamp="2026-02-19 00:20:04 +0000 UTC" firstStartedPulling="2026-02-19 00:20:04.831961392 +0000 UTC m=+634.668201371" lastFinishedPulling="2026-02-19 00:20:17.17288778 +0000 UTC m=+647.009127769" observedRunningTime="2026-02-19 00:20:18.473187008 +0000 UTC m=+648.309426997" watchObservedRunningTime="2026-02-19 00:20:18.477202864 +0000 UTC m=+648.313442853" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.486519 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-mgfrq" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.514312 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-7dwk9" podStartSLOduration=2.76678202 
podStartE2EDuration="15.514295639s" podCreationTimestamp="2026-02-19 00:20:03 +0000 UTC" firstStartedPulling="2026-02-19 00:20:04.42115786 +0000 UTC m=+634.257397849" lastFinishedPulling="2026-02-19 00:20:17.168671479 +0000 UTC m=+647.004911468" observedRunningTime="2026-02-19 00:20:18.512800736 +0000 UTC m=+648.349040725" watchObservedRunningTime="2026-02-19 00:20:18.514295639 +0000 UTC m=+648.350535618" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.570951 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-5754f7d948-xp5l2" podStartSLOduration=2.968817136 podStartE2EDuration="11.570934887s" podCreationTimestamp="2026-02-19 00:20:07 +0000 UTC" firstStartedPulling="2026-02-19 00:20:08.641201522 +0000 UTC m=+638.477441511" lastFinishedPulling="2026-02-19 00:20:17.243319273 +0000 UTC m=+647.079559262" observedRunningTime="2026-02-19 00:20:18.534195251 +0000 UTC m=+648.370435240" watchObservedRunningTime="2026-02-19 00:20:18.570934887 +0000 UTC m=+648.407174866" Feb 19 00:20:18 crc kubenswrapper[5109]: I0219 00:20:18.571048 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c" podStartSLOduration=2.9587675669999998 podStartE2EDuration="15.57104377s" podCreationTimestamp="2026-02-19 00:20:03 +0000 UTC" firstStartedPulling="2026-02-19 00:20:04.5687302 +0000 UTC m=+634.404970189" lastFinishedPulling="2026-02-19 00:20:17.181006403 +0000 UTC m=+647.017246392" observedRunningTime="2026-02-19 00:20:18.568948409 +0000 UTC m=+648.405188398" watchObservedRunningTime="2026-02-19 00:20:18.57104377 +0000 UTC m=+648.407283749" Feb 19 00:20:19 crc kubenswrapper[5109]: I0219 00:20:19.437366 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" 
event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e"} Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.441081 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz"] Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.448685 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.450822 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.450892 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-8sx5h\"" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.454067 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz"] Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.454876 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.616058 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcj8j\" (UniqueName: \"kubernetes.io/projected/f010beef-b288-4f44-8235-e6b45359a7d9-kube-api-access-lcj8j\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.616178 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f010beef-b288-4f44-8235-e6b45359a7d9-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.717198 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f010beef-b288-4f44-8235-e6b45359a7d9-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.717261 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcj8j\" (UniqueName: \"kubernetes.io/projected/f010beef-b288-4f44-8235-e6b45359a7d9-kube-api-access-lcj8j\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.717795 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f010beef-b288-4f44-8235-e6b45359a7d9-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.735022 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcj8j\" (UniqueName: \"kubernetes.io/projected/f010beef-b288-4f44-8235-e6b45359a7d9-kube-api-access-lcj8j\") pod 
\"cert-manager-operator-controller-manager-7c5b8bd68-r2sqz\" (UID: \"f010beef-b288-4f44-8235-e6b45359a7d9\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.763267 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" Feb 19 00:20:21 crc kubenswrapper[5109]: I0219 00:20:21.995564 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.004601 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.007827 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.007918 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.008420 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-9mpvv\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.008494 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.008614 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.008624 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 19 00:20:22 
crc kubenswrapper[5109]: I0219 00:20:22.008707 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.009959 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.010465 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.016042 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123042 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123084 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123119 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" 
(UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123152 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123253 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123306 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123356 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123391 5109 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123419 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123446 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123492 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123517 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: 
\"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123548 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123572 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.123612 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.156990 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz"] Feb 19 00:20:22 crc kubenswrapper[5109]: W0219 00:20:22.164308 5109 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf010beef_b288_4f44_8235_e6b45359a7d9.slice/crio-ad88a3e9e8164458221b315cca3215cca9a03e992c35f7612123459b1ff5ecce WatchSource:0}: Error finding container ad88a3e9e8164458221b315cca3215cca9a03e992c35f7612123459b1ff5ecce: Status 404 returned error can't find the container with id ad88a3e9e8164458221b315cca3215cca9a03e992c35f7612123459b1ff5ecce Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.224907 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.224961 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.224995 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225015 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-transport-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225048 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225065 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225096 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225119 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225137 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: 
\"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225175 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225197 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225222 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225239 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 
00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225253 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225269 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.225700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226046 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226459 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " 
pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226516 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226663 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226836 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.226876 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.227026 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-scripts\") pod 
\"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.230584 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.230792 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.230969 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.231026 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.231032 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.231279 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.231330 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/c9c257ed-3ada-4f89-acc4-d6ef40715e7e-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"c9c257ed-3ada-4f89-acc4-d6ef40715e7e\") " pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.323966 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.459896 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" event={"ID":"f010beef-b288-4f44-8235-e6b45359a7d9","Type":"ContainerStarted","Data":"ad88a3e9e8164458221b315cca3215cca9a03e992c35f7612123459b1ff5ecce"}
Feb 19 00:20:22 crc kubenswrapper[5109]: I0219 00:20:22.752893 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:20:23 crc kubenswrapper[5109]: I0219 00:20:23.469779 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9c257ed-3ada-4f89-acc4-d6ef40715e7e","Type":"ContainerStarted","Data":"60ec983116da0b46dd659af2351f83508ce9e8a3d1c1fc02f46611d6447fd82e"}
Feb 19 00:20:28 crc kubenswrapper[5109]: I0219 00:20:28.521683 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" event={"ID":"f010beef-b288-4f44-8235-e6b45359a7d9","Type":"ContainerStarted","Data":"e8b8a35d7a86ea161a11562a9139b5e233f29e45a5acfed8260ef6c7ff230b32"}
Feb 19 00:20:28 crc kubenswrapper[5109]: I0219 00:20:28.543434 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-r2sqz" podStartSLOduration=2.041251004 podStartE2EDuration="7.543418043s" podCreationTimestamp="2026-02-19 00:20:21 +0000 UTC" firstStartedPulling="2026-02-19 00:20:22.166980597 +0000 UTC m=+652.003220576" lastFinishedPulling="2026-02-19 00:20:27.669147626 +0000 UTC m=+657.505387615" observedRunningTime="2026-02-19 00:20:28.542777294 +0000 UTC m=+658.379017293" watchObservedRunningTime="2026-02-19 00:20:28.543418043 +0000 UTC m=+658.379658032"
Feb 19 00:20:29 crc kubenswrapper[5109]: I0219 00:20:29.441532 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-kqlcr"
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.329097 5109 scope.go:117] "RemoveContainer" containerID="14bf90bd26dc86e7e6b3251ec822d8527b75af6f5e1117fb11fba74b4b5cf44d"
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.958239 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-mgwcw"]
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.963874 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.970060 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-rbngr\""
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.970206 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\""
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.976621 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-mgwcw"]
Feb 19 00:20:31 crc kubenswrapper[5109]: I0219 00:20:31.990311 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\""
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.054273 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.054429 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqpkv\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-kube-api-access-gqpkv\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.155318 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gqpkv\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-kube-api-access-gqpkv\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.155374 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.187034 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.187182 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqpkv\" (UniqueName: \"kubernetes.io/projected/f419712c-11bd-425d-bcb7-e35869b34d49-kube-api-access-gqpkv\") pod \"cert-manager-webhook-597b96b99b-mgwcw\" (UID: \"f419712c-11bd-425d-bcb7-e35869b34d49\") " pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:32 crc kubenswrapper[5109]: I0219 00:20:32.300321 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:35 crc kubenswrapper[5109]: I0219 00:20:35.658512 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-mgwcw"]
Feb 19 00:20:35 crc kubenswrapper[5109]: W0219 00:20:35.663776 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf419712c_11bd_425d_bcb7_e35869b34d49.slice/crio-d55e4102784b70084f6159ab86ba92478f3ccf87202b0abcc7fbddffa8657af7 WatchSource:0}: Error finding container d55e4102784b70084f6159ab86ba92478f3ccf87202b0abcc7fbddffa8657af7: Status 404 returned error can't find the container with id d55e4102784b70084f6159ab86ba92478f3ccf87202b0abcc7fbddffa8657af7
Feb 19 00:20:36 crc kubenswrapper[5109]: I0219 00:20:36.568911 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9c257ed-3ada-4f89-acc4-d6ef40715e7e","Type":"ContainerStarted","Data":"b28a9d67ce71bde98b794e802e188cbad73e5a572612c7173cbccbf632c288f6"}
Feb 19 00:20:36 crc kubenswrapper[5109]: I0219 00:20:36.570474 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw" event={"ID":"f419712c-11bd-425d-bcb7-e35869b34d49","Type":"ContainerStarted","Data":"d55e4102784b70084f6159ab86ba92478f3ccf87202b0abcc7fbddffa8657af7"}
Feb 19 00:20:36 crc kubenswrapper[5109]: I0219 00:20:36.918054 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:20:36 crc kubenswrapper[5109]: I0219 00:20:36.960191 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.564267 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"]
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.570445 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.570967 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"]
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.573209 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-nq4d4\""
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.637228 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssbdw\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-kube-api-access-ssbdw\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.638802 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.740465 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssbdw\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-kube-api-access-ssbdw\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.740582 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.763114 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.763651 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssbdw\" (UniqueName: \"kubernetes.io/projected/66f4d41b-0b12-427b-8882-f81b5d18b662-kube-api-access-ssbdw\") pod \"cert-manager-cainjector-8966b78d4-kzv2n\" (UID: \"66f4d41b-0b12-427b-8882-f81b5d18b662\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:37 crc kubenswrapper[5109]: I0219 00:20:37.887935 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"
Feb 19 00:20:38 crc kubenswrapper[5109]: I0219 00:20:38.283505 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-kzv2n"]
Feb 19 00:20:38 crc kubenswrapper[5109]: I0219 00:20:38.584853 5109 generic.go:358] "Generic (PLEG): container finished" podID="c9c257ed-3ada-4f89-acc4-d6ef40715e7e" containerID="b28a9d67ce71bde98b794e802e188cbad73e5a572612c7173cbccbf632c288f6" exitCode=0
Feb 19 00:20:38 crc kubenswrapper[5109]: I0219 00:20:38.585161 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9c257ed-3ada-4f89-acc4-d6ef40715e7e","Type":"ContainerDied","Data":"b28a9d67ce71bde98b794e802e188cbad73e5a572612c7173cbccbf632c288f6"}
Feb 19 00:20:39 crc kubenswrapper[5109]: W0219 00:20:39.448014 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66f4d41b_0b12_427b_8882_f81b5d18b662.slice/crio-abd6a63829d68cf6d39f93ef6439619486aae02bdc93ab395c5c33cb1150d5ee WatchSource:0}: Error finding container abd6a63829d68cf6d39f93ef6439619486aae02bdc93ab395c5c33cb1150d5ee: Status 404 returned error can't find the container with id abd6a63829d68cf6d39f93ef6439619486aae02bdc93ab395c5c33cb1150d5ee
Feb 19 00:20:39 crc kubenswrapper[5109]: I0219 00:20:39.591788 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n" event={"ID":"66f4d41b-0b12-427b-8882-f81b5d18b662","Type":"ContainerStarted","Data":"abd6a63829d68cf6d39f93ef6439619486aae02bdc93ab395c5c33cb1150d5ee"}
Feb 19 00:20:40 crc kubenswrapper[5109]: I0219 00:20:40.602380 5109 generic.go:358] "Generic (PLEG): container finished" podID="c9c257ed-3ada-4f89-acc4-d6ef40715e7e" containerID="ef1274f90933950d112e9d261f06c076b27f57559acdcd4b9b54c7d6df25588e" exitCode=0
Feb 19 00:20:40 crc kubenswrapper[5109]: I0219 00:20:40.602535 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9c257ed-3ada-4f89-acc4-d6ef40715e7e","Type":"ContainerDied","Data":"ef1274f90933950d112e9d261f06c076b27f57559acdcd4b9b54c7d6df25588e"}
Feb 19 00:20:40 crc kubenswrapper[5109]: I0219 00:20:40.606459 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw" event={"ID":"f419712c-11bd-425d-bcb7-e35869b34d49","Type":"ContainerStarted","Data":"c5978aa64c79c0bf21f5123b5b68e0f6e921731aac1811baffb74d5811f0bb02"}
Feb 19 00:20:40 crc kubenswrapper[5109]: I0219 00:20:40.606591 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:40 crc kubenswrapper[5109]: I0219 00:20:40.696525 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw" podStartSLOduration=5.842737174 podStartE2EDuration="9.696494831s" podCreationTimestamp="2026-02-19 00:20:31 +0000 UTC" firstStartedPulling="2026-02-19 00:20:35.665327599 +0000 UTC m=+665.501567598" lastFinishedPulling="2026-02-19 00:20:39.519085256 +0000 UTC m=+669.355325255" observedRunningTime="2026-02-19 00:20:40.687484079 +0000 UTC m=+670.523724118" watchObservedRunningTime="2026-02-19 00:20:40.696494831 +0000 UTC m=+670.532734880"
Feb 19 00:20:46 crc kubenswrapper[5109]: I0219 00:20:46.618491 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-mgwcw"
Feb 19 00:20:46 crc kubenswrapper[5109]: I0219 00:20:46.653522 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"c9c257ed-3ada-4f89-acc4-d6ef40715e7e","Type":"ContainerStarted","Data":"c47f15a86c0dc54bd3e05cf916490630ae8a84612fc484feede9a09382dcb8d0"}
Feb 19 00:20:46 crc kubenswrapper[5109]: I0219 00:20:46.654507 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:20:46 crc kubenswrapper[5109]: I0219 00:20:46.722498 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=13.085764615 podStartE2EDuration="25.722463129s" podCreationTimestamp="2026-02-19 00:20:21 +0000 UTC" firstStartedPulling="2026-02-19 00:20:22.762795752 +0000 UTC m=+652.599035741" lastFinishedPulling="2026-02-19 00:20:35.399494266 +0000 UTC m=+665.235734255" observedRunningTime="2026-02-19 00:20:46.713011913 +0000 UTC m=+676.549251912" watchObservedRunningTime="2026-02-19 00:20:46.722463129 +0000 UTC m=+676.558703158"
Feb 19 00:20:47 crc kubenswrapper[5109]: I0219 00:20:47.661302 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n" event={"ID":"66f4d41b-0b12-427b-8882-f81b5d18b662","Type":"ContainerStarted","Data":"be6a8ae017bbe4e0292276ede9c999843cad426fd6a8d927b8d02d62d160fb9d"}
Feb 19 00:20:47 crc kubenswrapper[5109]: I0219 00:20:47.680206 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-kzv2n" podStartSLOduration=3.388265494 podStartE2EDuration="10.680185019s" podCreationTimestamp="2026-02-19 00:20:37 +0000 UTC" firstStartedPulling="2026-02-19 00:20:39.450678641 +0000 UTC m=+669.286918630" lastFinishedPulling="2026-02-19 00:20:46.742598166 +0000 UTC m=+676.578838155" observedRunningTime="2026-02-19 00:20:47.674899755 +0000 UTC m=+677.511139754" watchObservedRunningTime="2026-02-19 00:20:47.680185019 +0000 UTC m=+677.516425018"
Feb 19 00:20:50 crc kubenswrapper[5109]: I0219 00:20:50.983208 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-hc5g9"]
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.009377 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.018355 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-z4z27\""
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.032444 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-hc5g9"]
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.048205 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghqd2\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-kube-api-access-ghqd2\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.048321 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-bound-sa-token\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.150135 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-bound-sa-token\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.150276 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghqd2\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-kube-api-access-ghqd2\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.178628 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-bound-sa-token\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.184367 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghqd2\" (UniqueName: \"kubernetes.io/projected/7be08e5e-17a1-4333-b9ae-89730a5b2da3-kube-api-access-ghqd2\") pod \"cert-manager-759f64656b-hc5g9\" (UID: \"7be08e5e-17a1-4333-b9ae-89730a5b2da3\") " pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.337847 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-759f64656b-hc5g9"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.631835 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.643162 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.645702 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.648976 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\""
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.758657 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.758713 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m77cz\" (UniqueName: \"kubernetes.io/projected/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-kube-api-access-m77cz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.758847 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.834717 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-hc5g9"]
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.860917 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.861007 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m77cz\" (UniqueName: \"kubernetes.io/projected/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-kube-api-access-m77cz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.861137 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.862057 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.863224 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.885412 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m77cz\" (UniqueName: \"kubernetes.io/projected/0ed65cf0-e4b3-4b22-8873-f55fae1e7043-kube-api-access-m77cz\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"0ed65cf0-e4b3-4b22-8873-f55fae1e7043\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:51 crc kubenswrapper[5109]: I0219 00:20:51.971384 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"
Feb 19 00:20:52 crc kubenswrapper[5109]: I0219 00:20:52.381608 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"]
Feb 19 00:20:52 crc kubenswrapper[5109]: W0219 00:20:52.384289 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ed65cf0_e4b3_4b22_8873_f55fae1e7043.slice/crio-0d5c684720e24f2ab3be1f17baa5aace2b3bf0c637fab0eb11c9c1136cfa487d WatchSource:0}: Error finding container 0d5c684720e24f2ab3be1f17baa5aace2b3bf0c637fab0eb11c9c1136cfa487d: Status 404 returned error can't find the container with id 0d5c684720e24f2ab3be1f17baa5aace2b3bf0c637fab0eb11c9c1136cfa487d
Feb 19 00:20:52 crc kubenswrapper[5109]: I0219 00:20:52.694910 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-hc5g9" event={"ID":"7be08e5e-17a1-4333-b9ae-89730a5b2da3","Type":"ContainerStarted","Data":"9c46b7bad1dc84db99baeed484c96faa8514f11338256318c469b993aa628b4a"}
Feb 19 00:20:52 crc kubenswrapper[5109]: I0219 00:20:52.695274 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-hc5g9" event={"ID":"7be08e5e-17a1-4333-b9ae-89730a5b2da3","Type":"ContainerStarted","Data":"238c1e73179778c910afaa196fb7d6afc9c8a5c4461eaa49aade76012e12182d"}
Feb 19 00:20:52 crc kubenswrapper[5109]: I0219 00:20:52.697675 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0ed65cf0-e4b3-4b22-8873-f55fae1e7043","Type":"ContainerStarted","Data":"0d5c684720e24f2ab3be1f17baa5aace2b3bf0c637fab0eb11c9c1136cfa487d"}
Feb 19 00:20:52 crc kubenswrapper[5109]: I0219 00:20:52.720290 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-hc5g9" podStartSLOduration=2.720267002 podStartE2EDuration="2.720267002s" podCreationTimestamp="2026-02-19 00:20:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:20:52.714495394 +0000 UTC m=+682.550735393" watchObservedRunningTime="2026-02-19 00:20:52.720267002 +0000 UTC m=+682.556507001"
Feb 19 00:20:57 crc kubenswrapper[5109]: I0219 00:20:57.733439 5109 generic.go:358] "Generic (PLEG): container finished" podID="0ed65cf0-e4b3-4b22-8873-f55fae1e7043" containerID="52f1617deccd6bbec85192904d887242d62f2f0f61bfe5219e87d7a3db9cb94e" exitCode=0
Feb 19 00:20:57 crc kubenswrapper[5109]: I0219 00:20:57.733551 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0ed65cf0-e4b3-4b22-8873-f55fae1e7043","Type":"ContainerDied","Data":"52f1617deccd6bbec85192904d887242d62f2f0f61bfe5219e87d7a3db9cb94e"}
Feb 19 00:20:58 crc kubenswrapper[5109]: I0219 00:20:58.787741 5109 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="c9c257ed-3ada-4f89-acc4-d6ef40715e7e" containerName="elasticsearch" probeResult="failure" output=<
Feb 19 00:20:58 crc kubenswrapper[5109]: {"timestamp": "2026-02-19T00:20:58+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Feb 19 00:20:58 crc kubenswrapper[5109]: >
Feb 19 00:21:01 crc kubenswrapper[5109]: I0219 00:21:01.762220 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"0ed65cf0-e4b3-4b22-8873-f55fae1e7043","Type":"ContainerStarted","Data":"bbf1611960ab83561e0e94623d79e671bdc4df2d5d98013db56945c8248fdb37"}
Feb 19 00:21:01 crc kubenswrapper[5109]: I0219 00:21:01.800424 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=2.383695779 podStartE2EDuration="10.797928544s" podCreationTimestamp="2026-02-19 00:20:51 +0000 UTC" firstStartedPulling="2026-02-19 00:20:52.38726326 +0000 UTC m=+682.223503249" lastFinishedPulling="2026-02-19 00:21:00.801496025 +0000 UTC m=+690.637736014" observedRunningTime="2026-02-19 00:21:01.793465064 +0000 UTC m=+691.629705123" watchObservedRunningTime="2026-02-19 00:21:01.797928544 +0000 UTC m=+691.634168573"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.686261 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"]
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.692343 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.704305 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"]
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.831849 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.832032 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.832100 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47jkz\" (UniqueName: \"kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.933097 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.933193 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-47jkz\" (UniqueName: \"kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.933309 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.934047 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.934076 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.947954 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Feb 19 00:21:03 crc kubenswrapper[5109]: I0219 00:21:03.965666 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-47jkz\" (UniqueName: \"kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:04 crc kubenswrapper[5109]: I0219 00:21:04.017654 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"
Feb 19 00:21:04 crc kubenswrapper[5109]: I0219 00:21:04.464209 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw"]
Feb 19 00:21:04 crc kubenswrapper[5109]: W0219 00:21:04.476009 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8690eeea_9b4f_4617_8d5d_99d65ceb2090.slice/crio-5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c WatchSource:0}: Error finding container 5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c: Status 404 returned error can't find the container with id 5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c
Feb 19 00:21:04 crc kubenswrapper[5109]: I0219 00:21:04.784442 5109 generic.go:358] "Generic (PLEG): container finished" podID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerID="650dffe1ed6c19932ff0ca2614d38c524a1cc8bd645d1018c26a3e0433be034e" exitCode=0
Feb 19 00:21:04 crc kubenswrapper[5109]: I0219 00:21:04.784542 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" event={"ID":"8690eeea-9b4f-4617-8d5d-99d65ceb2090","Type":"ContainerDied","Data":"650dffe1ed6c19932ff0ca2614d38c524a1cc8bd645d1018c26a3e0433be034e"}
Feb 19 00:21:04 crc kubenswrapper[5109]: I0219 00:21:04.784919 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" event={"ID":"8690eeea-9b4f-4617-8d5d-99d65ceb2090","Type":"ContainerStarted","Data":"5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c"}
Feb 19 00:21:06 crc kubenswrapper[5109]: I0219 00:21:06.817910 5109 generic.go:358] "Generic (PLEG): container finished" podID="8690eeea-9b4f-4617-8d5d-99d65ceb2090"
containerID="d82cb9511beb3e40f6ab487cfeea8c7c70469416435b09427a617e5b8402316a" exitCode=0 Feb 19 00:21:06 crc kubenswrapper[5109]: I0219 00:21:06.818394 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" event={"ID":"8690eeea-9b4f-4617-8d5d-99d65ceb2090","Type":"ContainerDied","Data":"d82cb9511beb3e40f6ab487cfeea8c7c70469416435b09427a617e5b8402316a"} Feb 19 00:21:07 crc kubenswrapper[5109]: I0219 00:21:07.830817 5109 generic.go:358] "Generic (PLEG): container finished" podID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerID="46ff110d7f3db03e5d9180f937dbf29f914c5497239eb666692373bf42a93a1c" exitCode=0 Feb 19 00:21:07 crc kubenswrapper[5109]: I0219 00:21:07.830934 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" event={"ID":"8690eeea-9b4f-4617-8d5d-99d65ceb2090","Type":"ContainerDied","Data":"46ff110d7f3db03e5d9180f937dbf29f914c5497239eb666692373bf42a93a1c"} Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.180747 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.206977 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle\") pod \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.208262 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle" (OuterVolumeSpecName: "bundle") pod "8690eeea-9b4f-4617-8d5d-99d65ceb2090" (UID: "8690eeea-9b4f-4617-8d5d-99d65ceb2090"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.308357 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util\") pod \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.308572 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47jkz\" (UniqueName: \"kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz\") pod \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\" (UID: \"8690eeea-9b4f-4617-8d5d-99d65ceb2090\") " Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.309002 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-bundle\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.316579 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz" (OuterVolumeSpecName: "kube-api-access-47jkz") pod "8690eeea-9b4f-4617-8d5d-99d65ceb2090" (UID: "8690eeea-9b4f-4617-8d5d-99d65ceb2090"). InnerVolumeSpecName "kube-api-access-47jkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.326510 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util" (OuterVolumeSpecName: "util") pod "8690eeea-9b4f-4617-8d5d-99d65ceb2090" (UID: "8690eeea-9b4f-4617-8d5d-99d65ceb2090"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.409696 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/8690eeea-9b4f-4617-8d5d-99d65ceb2090-util\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.409735 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-47jkz\" (UniqueName: \"kubernetes.io/projected/8690eeea-9b4f-4617-8d5d-99d65ceb2090-kube-api-access-47jkz\") on node \"crc\" DevicePath \"\"" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.852289 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.852356 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad976661h8nnw" event={"ID":"8690eeea-9b4f-4617-8d5d-99d65ceb2090","Type":"ContainerDied","Data":"5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c"} Feb 19 00:21:09 crc kubenswrapper[5109]: I0219 00:21:09.852428 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cce14ba1c5b5a9b7da3da4b7b96e77daa83c22bb50c1a508f7160a1bb61d50c" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.747370 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-64rz7"] Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748353 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="util" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748383 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="util" Feb 19 00:21:13 crc 
kubenswrapper[5109]: I0219 00:21:13.748410 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="pull" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748418 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="pull" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748434 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="extract" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748443 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="extract" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.748561 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="8690eeea-9b4f-4617-8d5d-99d65ceb2090" containerName="extract" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.752203 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.755860 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-dwsmr\"" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.757369 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-64rz7"] Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.773592 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/16f870f2-494d-439e-a72c-73446c158d32-runner\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.773776 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9xs7\" (UniqueName: \"kubernetes.io/projected/16f870f2-494d-439e-a72c-73446c158d32-kube-api-access-n9xs7\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.875191 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n9xs7\" (UniqueName: \"kubernetes.io/projected/16f870f2-494d-439e-a72c-73446c158d32-kube-api-access-n9xs7\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.875576 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/16f870f2-494d-439e-a72c-73446c158d32-runner\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.876028 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/16f870f2-494d-439e-a72c-73446c158d32-runner\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:13 crc kubenswrapper[5109]: I0219 00:21:13.893615 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9xs7\" (UniqueName: \"kubernetes.io/projected/16f870f2-494d-439e-a72c-73446c158d32-kube-api-access-n9xs7\") pod \"smart-gateway-operator-97b85656c-64rz7\" (UID: \"16f870f2-494d-439e-a72c-73446c158d32\") " pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:14 crc kubenswrapper[5109]: I0219 00:21:14.069594 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" Feb 19 00:21:14 crc kubenswrapper[5109]: I0219 00:21:14.289411 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-64rz7"] Feb 19 00:21:14 crc kubenswrapper[5109]: W0219 00:21:14.290258 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16f870f2_494d_439e_a72c_73446c158d32.slice/crio-ddab3a2abe07bbe161c5442f6c2ee97594482500ea3d6c4b8844d622278d4c5e WatchSource:0}: Error finding container ddab3a2abe07bbe161c5442f6c2ee97594482500ea3d6c4b8844d622278d4c5e: Status 404 returned error can't find the container with id ddab3a2abe07bbe161c5442f6c2ee97594482500ea3d6c4b8844d622278d4c5e Feb 19 00:21:14 crc kubenswrapper[5109]: I0219 00:21:14.885667 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" event={"ID":"16f870f2-494d-439e-a72c-73446c158d32","Type":"ContainerStarted","Data":"ddab3a2abe07bbe161c5442f6c2ee97594482500ea3d6c4b8844d622278d4c5e"} Feb 19 00:21:28 crc kubenswrapper[5109]: I0219 00:21:28.978220 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" event={"ID":"16f870f2-494d-439e-a72c-73446c158d32","Type":"ContainerStarted","Data":"8f6bf24d915d53fc5fcafe36da11b627dc6a4fe4dee4bf45894e472df2429ec5"} Feb 19 00:21:29 crc kubenswrapper[5109]: I0219 00:21:29.001412 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-64rz7" podStartSLOduration=2.351651216 podStartE2EDuration="16.001385324s" podCreationTimestamp="2026-02-19 00:21:13 +0000 UTC" firstStartedPulling="2026-02-19 00:21:14.291134353 +0000 UTC m=+704.127374352" lastFinishedPulling="2026-02-19 00:21:27.940868471 +0000 UTC m=+717.777108460" observedRunningTime="2026-02-19 
00:21:28.995965244 +0000 UTC m=+718.832205273" watchObservedRunningTime="2026-02-19 00:21:29.001385324 +0000 UTC m=+718.837625353" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.351685 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.392413 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.392769 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.396910 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.490038 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.490196 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd784\" (UniqueName: \"kubernetes.io/projected/64cb204c-af9a-4b13-badf-5e8b964cf490-kube-api-access-sd784\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " 
pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.490285 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.591912 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sd784\" (UniqueName: \"kubernetes.io/projected/64cb204c-af9a-4b13-badf-5e8b964cf490-kube-api-access-sd784\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.592065 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.592208 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: 
\"kubernetes.io/empty-dir/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.592945 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.593496 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/64cb204c-af9a-4b13-badf-5e8b964cf490-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.625324 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd784\" (UniqueName: \"kubernetes.io/projected/64cb204c-af9a-4b13-badf-5e8b964cf490-kube-api-access-sd784\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"64cb204c-af9a-4b13-badf-5e8b964cf490\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:49 crc kubenswrapper[5109]: I0219 00:21:49.713390 5109 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 19 00:21:50 crc kubenswrapper[5109]: I0219 00:21:50.154754 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 19 00:21:50 crc kubenswrapper[5109]: W0219 00:21:50.157131 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64cb204c_af9a_4b13_badf_5e8b964cf490.slice/crio-58653e0c55b2bb8826e11066aad4587c2aa91fcc7b9b3ebe0aeed65f8eb05392 WatchSource:0}: Error finding container 58653e0c55b2bb8826e11066aad4587c2aa91fcc7b9b3ebe0aeed65f8eb05392: Status 404 returned error can't find the container with id 58653e0c55b2bb8826e11066aad4587c2aa91fcc7b9b3ebe0aeed65f8eb05392 Feb 19 00:21:51 crc kubenswrapper[5109]: I0219 00:21:51.159290 5109 generic.go:358] "Generic (PLEG): container finished" podID="64cb204c-af9a-4b13-badf-5e8b964cf490" containerID="13539192f2aa8ae1a3b36589b4a3e4aa5e25ddbe456a58e8a0789c74d9cc0a3e" exitCode=0 Feb 19 00:21:51 crc kubenswrapper[5109]: I0219 00:21:51.160191 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"64cb204c-af9a-4b13-badf-5e8b964cf490","Type":"ContainerDied","Data":"13539192f2aa8ae1a3b36589b4a3e4aa5e25ddbe456a58e8a0789c74d9cc0a3e"} Feb 19 00:21:51 crc kubenswrapper[5109]: I0219 00:21:51.160248 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"64cb204c-af9a-4b13-badf-5e8b964cf490","Type":"ContainerStarted","Data":"58653e0c55b2bb8826e11066aad4587c2aa91fcc7b9b3ebe0aeed65f8eb05392"} Feb 19 00:21:52 crc kubenswrapper[5109]: I0219 00:21:52.169891 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"64cb204c-af9a-4b13-badf-5e8b964cf490","Type":"ContainerStarted","Data":"f5f8103b22bc642a2cea541579fb8a7315375df620d9b54ddf74e1e948b0931a"} Feb 19 00:21:52 crc kubenswrapper[5109]: I0219 00:21:52.190968 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.71866824 podStartE2EDuration="3.190949723s" podCreationTimestamp="2026-02-19 00:21:49 +0000 UTC" firstStartedPulling="2026-02-19 00:21:51.161029649 +0000 UTC m=+740.997269678" lastFinishedPulling="2026-02-19 00:21:51.633311172 +0000 UTC m=+741.469551161" observedRunningTime="2026-02-19 00:21:52.188846035 +0000 UTC m=+742.025086024" watchObservedRunningTime="2026-02-19 00:21:52.190949723 +0000 UTC m=+742.027189712" Feb 19 00:21:53 crc kubenswrapper[5109]: I0219 00:21:53.763756 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"] Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.042090 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"] Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.042375 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.053567 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.053682 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvv8k\" (UniqueName: \"kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.053730 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.155504 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" 
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.155579 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zvv8k\" (UniqueName: \"kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.155609 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.156141 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.156199 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.180507 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvv8k\" (UniqueName: 
\"kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.358714 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.403962 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"]
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.624685 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"]
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.625003 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"]
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.624846 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.626961 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.661914 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.662121 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.662314 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlqh2\" (UniqueName: \"kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.764567 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.765333 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.765656 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlqh2\" (UniqueName: \"kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.765770 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.766184 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.798283 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlqh2\" (UniqueName: \"kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:54 crc kubenswrapper[5109]: I0219 00:21:54.959090 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:21:55 crc kubenswrapper[5109]: I0219 00:21:55.192418 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" event={"ID":"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2","Type":"ContainerStarted","Data":"94462b76b5e4190aade30b496a0d38531764ab987374a80bcb0bdb91810d0da5"}
Feb 19 00:21:55 crc kubenswrapper[5109]: I0219 00:21:55.220653 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"]
Feb 19 00:21:56 crc kubenswrapper[5109]: I0219 00:21:56.203890 5109 generic.go:358] "Generic (PLEG): container finished" podID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerID="ab8ce718c1d68d75252e3d95cffe15ba8eda183b73e3d231d21021c4ea7cb7fd" exitCode=0
Feb 19 00:21:56 crc kubenswrapper[5109]: I0219 00:21:56.204041 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" event={"ID":"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2","Type":"ContainerDied","Data":"ab8ce718c1d68d75252e3d95cffe15ba8eda183b73e3d231d21021c4ea7cb7fd"}
Feb 19 00:21:56 crc kubenswrapper[5109]: I0219 00:21:56.209477 5109 generic.go:358] "Generic (PLEG): container finished" podID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerID="bb1c2c276a582e5f7c967d90925ff1dded43adf1944b2f31be32635276e3d3d9" exitCode=0
Feb 19 00:21:56 crc kubenswrapper[5109]: I0219 00:21:56.209543 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q" event={"ID":"19f51d62-ca7e-40d4-9aa3-1a53dc412fea","Type":"ContainerDied","Data":"bb1c2c276a582e5f7c967d90925ff1dded43adf1944b2f31be32635276e3d3d9"}
Feb 19 00:21:56 crc kubenswrapper[5109]: I0219 00:21:56.209580 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q" event={"ID":"19f51d62-ca7e-40d4-9aa3-1a53dc412fea","Type":"ContainerStarted","Data":"138eeb6efa2a3f46cca7ff50366b11cd51d61f9cf64dbcb490b7bac4b05c5017"}
Feb 19 00:21:57 crc kubenswrapper[5109]: I0219 00:21:57.220843 5109 generic.go:358] "Generic (PLEG): container finished" podID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerID="932bd430427cdf09a7e4d95d05c6680408ce9beca7fe94386e3db900b9ce40a5" exitCode=0
Feb 19 00:21:57 crc kubenswrapper[5109]: I0219 00:21:57.220923 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" event={"ID":"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2","Type":"ContainerDied","Data":"932bd430427cdf09a7e4d95d05c6680408ce9beca7fe94386e3db900b9ce40a5"}
Feb 19 00:21:57 crc kubenswrapper[5109]: I0219 00:21:57.921576 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"]
Feb 19 00:21:57 crc kubenswrapper[5109]: I0219 00:21:57.928510 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:57 crc kubenswrapper[5109]: I0219 00:21:57.938619 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"]
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.011815 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.011890 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.011928 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5bfb\" (UniqueName: \"kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.113528 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5bfb\" (UniqueName: \"kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.113668 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.113710 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.114167 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.114225 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.140786 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5bfb\" (UniqueName: \"kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb\") pod \"redhat-operators-nwpft\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.229991 5109 generic.go:358] "Generic (PLEG): container finished" podID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerID="4a0aa7e5ae4d96ac9d75f03b76fb4269ddb64e2ac88fd25f438c5d6c42a0acad" exitCode=0
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.230115 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" event={"ID":"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2","Type":"ContainerDied","Data":"4a0aa7e5ae4d96ac9d75f03b76fb4269ddb64e2ac88fd25f438c5d6c42a0acad"}
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.232262 5109 generic.go:358] "Generic (PLEG): container finished" podID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerID="e37acea77af5dc42589b8c8e09bd4bff9b761516f2c7e14e0be337b8fe7da29b" exitCode=0
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.232304 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q" event={"ID":"19f51d62-ca7e-40d4-9aa3-1a53dc412fea","Type":"ContainerDied","Data":"e37acea77af5dc42589b8c8e09bd4bff9b761516f2c7e14e0be337b8fe7da29b"}
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.248268 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwpft"
Feb 19 00:21:58 crc kubenswrapper[5109]: I0219 00:21:58.671793 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"]
Feb 19 00:21:58 crc kubenswrapper[5109]: W0219 00:21:58.679495 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6c36f1c_6a11_4867_a0b7_3b9f60510b87.slice/crio-7974e461709d4790eb78421bef335b797ef56c4fde8fa80f620ce3d2651ebcdf WatchSource:0}: Error finding container 7974e461709d4790eb78421bef335b797ef56c4fde8fa80f620ce3d2651ebcdf: Status 404 returned error can't find the container with id 7974e461709d4790eb78421bef335b797ef56c4fde8fa80f620ce3d2651ebcdf
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.244029 5109 generic.go:358] "Generic (PLEG): container finished" podID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerID="44b66cb20767ad5c0ab5c529149d80b36ae4edf2a5d794412a464102aef4dfec" exitCode=0
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.244097 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q" event={"ID":"19f51d62-ca7e-40d4-9aa3-1a53dc412fea","Type":"ContainerDied","Data":"44b66cb20767ad5c0ab5c529149d80b36ae4edf2a5d794412a464102aef4dfec"}
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.246664 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerStarted","Data":"7974e461709d4790eb78421bef335b797ef56c4fde8fa80f620ce3d2651ebcdf"}
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.588438 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.633022 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle\") pod \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") "
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.633102 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util\") pod \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") "
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.633165 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvv8k\" (UniqueName: \"kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k\") pod \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\" (UID: \"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2\") "
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.633755 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle" (OuterVolumeSpecName: "bundle") pod "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" (UID: "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.641186 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k" (OuterVolumeSpecName: "kube-api-access-zvv8k") pod "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" (UID: "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2"). InnerVolumeSpecName "kube-api-access-zvv8k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.647111 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util" (OuterVolumeSpecName: "util") pod "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" (UID: "9f7762b8-ce37-43d6-b828-2f8e87d0a0f2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.734900 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.734930 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:21:59 crc kubenswrapper[5109]: I0219 00:21:59.734944 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zvv8k\" (UniqueName: \"kubernetes.io/projected/9f7762b8-ce37-43d6-b828-2f8e87d0a0f2-kube-api-access-zvv8k\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.136938 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524342-j7jg7"]
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138126 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="extract"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138152 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="extract"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138176 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="util"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138186 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="util"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138212 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="pull"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138220 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="pull"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.138361 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="9f7762b8-ce37-43d6-b828-2f8e87d0a0f2" containerName="extract"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.142137 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.144677 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.146672 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.147102 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.149587 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-j7jg7"]
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.243145 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p57vb\" (UniqueName: \"kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb\") pod \"auto-csr-approver-29524342-j7jg7\" (UID: \"14a5ab9a-a49c-43fd-855c-a409b8c60e2c\") " pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.256530 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg" event={"ID":"9f7762b8-ce37-43d6-b828-2f8e87d0a0f2","Type":"ContainerDied","Data":"94462b76b5e4190aade30b496a0d38531764ab987374a80bcb0bdb91810d0da5"}
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.256591 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d572wjvlg"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.256623 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94462b76b5e4190aade30b496a0d38531764ab987374a80bcb0bdb91810d0da5"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.258948 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerID="7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef" exitCode=0
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.259122 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerDied","Data":"7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef"}
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.344844 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p57vb\" (UniqueName: \"kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb\") pod \"auto-csr-approver-29524342-j7jg7\" (UID: \"14a5ab9a-a49c-43fd-855c-a409b8c60e2c\") " pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.365795 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p57vb\" (UniqueName: \"kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb\") pod \"auto-csr-approver-29524342-j7jg7\" (UID: \"14a5ab9a-a49c-43fd-855c-a409b8c60e2c\") " pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.472901 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.722328 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.751265 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util\") pod \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") "
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.751327 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlqh2\" (UniqueName: \"kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2\") pod \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") "
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.751453 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle\") pod \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\" (UID: \"19f51d62-ca7e-40d4-9aa3-1a53dc412fea\") "
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.752537 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle" (OuterVolumeSpecName: "bundle") pod "19f51d62-ca7e-40d4-9aa3-1a53dc412fea" (UID: "19f51d62-ca7e-40d4-9aa3-1a53dc412fea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.757213 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2" (OuterVolumeSpecName: "kube-api-access-wlqh2") pod "19f51d62-ca7e-40d4-9aa3-1a53dc412fea" (UID: "19f51d62-ca7e-40d4-9aa3-1a53dc412fea"). InnerVolumeSpecName "kube-api-access-wlqh2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.767561 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util" (OuterVolumeSpecName: "util") pod "19f51d62-ca7e-40d4-9aa3-1a53dc412fea" (UID: "19f51d62-ca7e-40d4-9aa3-1a53dc412fea"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.854036 5109 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-bundle\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.854117 5109 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-util\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.854146 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlqh2\" (UniqueName: \"kubernetes.io/projected/19f51d62-ca7e-40d4-9aa3-1a53dc412fea-kube-api-access-wlqh2\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:00 crc kubenswrapper[5109]: I0219 00:22:00.959369 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-j7jg7"]
Feb 19 00:22:00 crc kubenswrapper[5109]: W0219 00:22:00.972580 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14a5ab9a_a49c_43fd_855c_a409b8c60e2c.slice/crio-a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b WatchSource:0}: Error finding container a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b: Status 404 returned error can't find the container with id a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b
Feb 19 00:22:01 crc kubenswrapper[5109]: I0219 00:22:01.270316 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-j7jg7" event={"ID":"14a5ab9a-a49c-43fd-855c-a409b8c60e2c","Type":"ContainerStarted","Data":"a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b"}
Feb 19 00:22:01 crc kubenswrapper[5109]: I0219 00:22:01.273827 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q"
Feb 19 00:22:01 crc kubenswrapper[5109]: I0219 00:22:01.273853 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q" event={"ID":"19f51d62-ca7e-40d4-9aa3-1a53dc412fea","Type":"ContainerDied","Data":"138eeb6efa2a3f46cca7ff50366b11cd51d61f9cf64dbcb490b7bac4b05c5017"}
Feb 19 00:22:01 crc kubenswrapper[5109]: I0219 00:22:01.273889 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138eeb6efa2a3f46cca7ff50366b11cd51d61f9cf64dbcb490b7bac4b05c5017"
Feb 19 00:22:01 crc kubenswrapper[5109]: I0219 00:22:01.276783 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerStarted","Data":"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2"}
Feb 19 00:22:02 crc kubenswrapper[5109]: I0219 00:22:02.285790 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerID="25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2" exitCode=0
Feb 19 00:22:02 crc kubenswrapper[5109]: I0219 00:22:02.285963 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerDied","Data":"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2"}
Feb 19 00:22:03 crc kubenswrapper[5109]: I0219 00:22:03.302422 5109 generic.go:358] "Generic (PLEG): container finished" podID="14a5ab9a-a49c-43fd-855c-a409b8c60e2c" containerID="5f58160f09a5b90dba930a51dfc3c90c52d0dff61c933b5eb87d03ab962a25f6" exitCode=0
Feb 19 00:22:03 crc kubenswrapper[5109]: I0219 00:22:03.302582 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-j7jg7" event={"ID":"14a5ab9a-a49c-43fd-855c-a409b8c60e2c","Type":"ContainerDied","Data":"5f58160f09a5b90dba930a51dfc3c90c52d0dff61c933b5eb87d03ab962a25f6"}
Feb 19 00:22:03 crc kubenswrapper[5109]: I0219 00:22:03.308796 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerStarted","Data":"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19"}
Feb 19 00:22:03 crc kubenswrapper[5109]: I0219 00:22:03.342784 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nwpft" podStartSLOduration=5.512224333 podStartE2EDuration="6.342759655s" podCreationTimestamp="2026-02-19 00:21:57 +0000 UTC" firstStartedPulling="2026-02-19 00:22:00.260848151 +0000 UTC m=+750.097088150" lastFinishedPulling="2026-02-19 00:22:01.091383443 +0000 UTC m=+750.927623472" observedRunningTime="2026-02-19 00:22:03.337155489 +0000 UTC m=+753.173395478" watchObservedRunningTime="2026-02-19 00:22:03.342759655 +0000 UTC m=+753.178999644"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.574257 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-j7jg7"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641071 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-cghjb"]
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641734 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="14a5ab9a-a49c-43fd-855c-a409b8c60e2c" containerName="oc"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641751 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a5ab9a-a49c-43fd-855c-a409b8c60e2c" containerName="oc"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641776 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="util"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641782 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="util"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641805 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="pull"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641810 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="pull"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641817 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="extract"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641821 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="extract"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641908 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="19f51d62-ca7e-40d4-9aa3-1a53dc412fea" containerName="extract"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.641919 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="14a5ab9a-a49c-43fd-855c-a409b8c60e2c" containerName="oc"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.647297 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.650097 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-rswgq\""
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.652973 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-cghjb"]
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.703850 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p57vb\" (UniqueName: \"kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb\") pod \"14a5ab9a-a49c-43fd-855c-a409b8c60e2c\" (UID: \"14a5ab9a-a49c-43fd-855c-a409b8c60e2c\") "
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.704157 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-runner\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID: \"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.704189 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjsq\" (UniqueName: \"kubernetes.io/projected/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-kube-api-access-cnjsq\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID: \"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.710171 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb" (OuterVolumeSpecName: "kube-api-access-p57vb") pod "14a5ab9a-a49c-43fd-855c-a409b8c60e2c" (UID: "14a5ab9a-a49c-43fd-855c-a409b8c60e2c"). InnerVolumeSpecName "kube-api-access-p57vb". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.805678 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-runner\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID: \"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.805737 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjsq\" (UniqueName: \"kubernetes.io/projected/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-kube-api-access-cnjsq\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID: \"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb"
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.805849 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p57vb\" (UniqueName: \"kubernetes.io/projected/14a5ab9a-a49c-43fd-855c-a409b8c60e2c-kube-api-access-p57vb\") on node \"crc\" DevicePath \"\""
Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.806292 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-runner\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID:
\"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.823297 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjsq\" (UniqueName: \"kubernetes.io/projected/7836af4d-7c84-45ae-af6c-cd9f6edcc7fa-kube-api-access-cnjsq\") pod \"service-telemetry-operator-794b5697c7-cghjb\" (UID: \"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" Feb 19 00:22:04 crc kubenswrapper[5109]: I0219 00:22:04.964831 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.162330 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-cghjb"] Feb 19 00:22:05 crc kubenswrapper[5109]: W0219 00:22:05.168750 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7836af4d_7c84_45ae_af6c_cd9f6edcc7fa.slice/crio-a7186db9a8a9afc6ad0f56fc9e13eb7a28d13ce896ba7e9db269865fd4da7635 WatchSource:0}: Error finding container a7186db9a8a9afc6ad0f56fc9e13eb7a28d13ce896ba7e9db269865fd4da7635: Status 404 returned error can't find the container with id a7186db9a8a9afc6ad0f56fc9e13eb7a28d13ce896ba7e9db269865fd4da7635 Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.325312 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" event={"ID":"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa","Type":"ContainerStarted","Data":"a7186db9a8a9afc6ad0f56fc9e13eb7a28d13ce896ba7e9db269865fd4da7635"} Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.327412 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524342-j7jg7" 
event={"ID":"14a5ab9a-a49c-43fd-855c-a409b8c60e2c","Type":"ContainerDied","Data":"a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b"} Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.327450 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91b2991bec60e19ef9df9075cb3d8a8231e579a1a19ef0e664f67355fdb7c4b" Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.327488 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524342-j7jg7" Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.648158 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-tg2rm"] Feb 19 00:22:05 crc kubenswrapper[5109]: I0219 00:22:05.654168 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524336-tg2rm"] Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.055680 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ttvh6"] Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.060603 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.063339 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-vtxpn\"" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.064392 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ttvh6"] Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.121341 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhrj\" (UniqueName: \"kubernetes.io/projected/0ae4cb0e-31cd-4928-8944-e8edfeb950e4-kube-api-access-grhrj\") pod \"interconnect-operator-78b9bd8798-ttvh6\" (UID: \"0ae4cb0e-31cd-4928-8944-e8edfeb950e4\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.222518 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-grhrj\" (UniqueName: \"kubernetes.io/projected/0ae4cb0e-31cd-4928-8944-e8edfeb950e4-kube-api-access-grhrj\") pod \"interconnect-operator-78b9bd8798-ttvh6\" (UID: \"0ae4cb0e-31cd-4928-8944-e8edfeb950e4\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.256689 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhrj\" (UniqueName: \"kubernetes.io/projected/0ae4cb0e-31cd-4928-8944-e8edfeb950e4-kube-api-access-grhrj\") pod \"interconnect-operator-78b9bd8798-ttvh6\" (UID: \"0ae4cb0e-31cd-4928-8944-e8edfeb950e4\") " pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.378894 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.612940 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-ttvh6"] Feb 19 00:22:06 crc kubenswrapper[5109]: W0219 00:22:06.622543 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae4cb0e_31cd_4928_8944_e8edfeb950e4.slice/crio-790e807a72f40f667506951d3d25483e4b30dcb20890777de54a01c1b643208e WatchSource:0}: Error finding container 790e807a72f40f667506951d3d25483e4b30dcb20890777de54a01c1b643208e: Status 404 returned error can't find the container with id 790e807a72f40f667506951d3d25483e4b30dcb20890777de54a01c1b643208e Feb 19 00:22:06 crc kubenswrapper[5109]: I0219 00:22:06.999461 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095c765b-bd19-495f-a5d2-60abe52b0ee8" path="/var/lib/kubelet/pods/095c765b-bd19-495f-a5d2-60abe52b0ee8/volumes" Feb 19 00:22:07 crc kubenswrapper[5109]: I0219 00:22:07.340429 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" event={"ID":"0ae4cb0e-31cd-4928-8944-e8edfeb950e4","Type":"ContainerStarted","Data":"790e807a72f40f667506951d3d25483e4b30dcb20890777de54a01c1b643208e"} Feb 19 00:22:08 crc kubenswrapper[5109]: I0219 00:22:08.248917 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:08 crc kubenswrapper[5109]: I0219 00:22:08.249246 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:08 crc kubenswrapper[5109]: I0219 00:22:08.295115 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:08 crc 
kubenswrapper[5109]: I0219 00:22:08.379752 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:11 crc kubenswrapper[5109]: I0219 00:22:11.373992 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" event={"ID":"7836af4d-7c84-45ae-af6c-cd9f6edcc7fa","Type":"ContainerStarted","Data":"b54d555cdd4e18de687c1e2d2056a65f87fe2d9d7576290ecdedb426393296bf"} Feb 19 00:22:11 crc kubenswrapper[5109]: I0219 00:22:11.393624 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-cghjb" podStartSLOduration=1.525222205 podStartE2EDuration="7.393606257s" podCreationTimestamp="2026-02-19 00:22:04 +0000 UTC" firstStartedPulling="2026-02-19 00:22:05.170299638 +0000 UTC m=+755.006539627" lastFinishedPulling="2026-02-19 00:22:11.03868367 +0000 UTC m=+760.874923679" observedRunningTime="2026-02-19 00:22:11.393183716 +0000 UTC m=+761.229423705" watchObservedRunningTime="2026-02-19 00:22:11.393606257 +0000 UTC m=+761.229846236" Feb 19 00:22:11 crc kubenswrapper[5109]: I0219 00:22:11.515455 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"] Feb 19 00:22:11 crc kubenswrapper[5109]: I0219 00:22:11.515934 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nwpft" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="registry-server" containerID="cri-o://9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19" gracePeriod=2 Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.364506 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.391793 5109 generic.go:358] "Generic (PLEG): container finished" podID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerID="9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19" exitCode=0 Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.392876 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nwpft" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.393162 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerDied","Data":"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19"} Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.393200 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nwpft" event={"ID":"a6c36f1c-6a11-4867-a0b7-3b9f60510b87","Type":"ContainerDied","Data":"7974e461709d4790eb78421bef335b797ef56c4fde8fa80f620ce3d2651ebcdf"} Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.393216 5109 scope.go:117] "RemoveContainer" containerID="9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.517219 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5bfb\" (UniqueName: \"kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb\") pod \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.517268 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content\") pod 
\"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.517362 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities\") pod \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\" (UID: \"a6c36f1c-6a11-4867-a0b7-3b9f60510b87\") " Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.520132 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities" (OuterVolumeSpecName: "utilities") pod "a6c36f1c-6a11-4867-a0b7-3b9f60510b87" (UID: "a6c36f1c-6a11-4867-a0b7-3b9f60510b87"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.535421 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb" (OuterVolumeSpecName: "kube-api-access-s5bfb") pod "a6c36f1c-6a11-4867-a0b7-3b9f60510b87" (UID: "a6c36f1c-6a11-4867-a0b7-3b9f60510b87"). InnerVolumeSpecName "kube-api-access-s5bfb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.618712 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s5bfb\" (UniqueName: \"kubernetes.io/projected/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-kube-api-access-s5bfb\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.618751 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-utilities\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.630672 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a6c36f1c-6a11-4867-a0b7-3b9f60510b87" (UID: "a6c36f1c-6a11-4867-a0b7-3b9f60510b87"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.720252 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a6c36f1c-6a11-4867-a0b7-3b9f60510b87-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.726043 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"] Feb 19 00:22:12 crc kubenswrapper[5109]: I0219 00:22:12.731428 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nwpft"] Feb 19 00:22:13 crc kubenswrapper[5109]: I0219 00:22:13.000402 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" path="/var/lib/kubelet/pods/a6c36f1c-6a11-4867-a0b7-3b9f60510b87/volumes" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.617311 5109 scope.go:117] 
"RemoveContainer" containerID="25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.670787 5109 scope.go:117] "RemoveContainer" containerID="7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.685047 5109 scope.go:117] "RemoveContainer" containerID="9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19" Feb 19 00:22:15 crc kubenswrapper[5109]: E0219 00:22:15.685452 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19\": container with ID starting with 9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19 not found: ID does not exist" containerID="9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.685484 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19"} err="failed to get container status \"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19\": rpc error: code = NotFound desc = could not find container \"9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19\": container with ID starting with 9e436185d541efdc0ef467e24d2847dd6e6a6302efc0e660953762809d0d1a19 not found: ID does not exist" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.685504 5109 scope.go:117] "RemoveContainer" containerID="25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2" Feb 19 00:22:15 crc kubenswrapper[5109]: E0219 00:22:15.686111 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2\": container with ID starting with 
25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2 not found: ID does not exist" containerID="25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.686193 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2"} err="failed to get container status \"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2\": rpc error: code = NotFound desc = could not find container \"25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2\": container with ID starting with 25cebad59a7654a6903b83471e046ab91353dfc9587ff64570ed0d791f2360a2 not found: ID does not exist" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.686227 5109 scope.go:117] "RemoveContainer" containerID="7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef" Feb 19 00:22:15 crc kubenswrapper[5109]: E0219 00:22:15.686570 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef\": container with ID starting with 7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef not found: ID does not exist" containerID="7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef" Feb 19 00:22:15 crc kubenswrapper[5109]: I0219 00:22:15.686604 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef"} err="failed to get container status \"7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef\": rpc error: code = NotFound desc = could not find container \"7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef\": container with ID starting with 7d8feab1de939fdcb64a6c4fdad3ad6864a27f34ad93a9492d1c76dfec1b74ef not found: ID does not 
exist" Feb 19 00:22:16 crc kubenswrapper[5109]: I0219 00:22:16.418938 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" event={"ID":"0ae4cb0e-31cd-4928-8944-e8edfeb950e4","Type":"ContainerStarted","Data":"570b8c23726c7d2ce0570d8f6ac1955ce6f87256bf00c6bc706ed19b0c77bacd"} Feb 19 00:22:16 crc kubenswrapper[5109]: I0219 00:22:16.431106 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-ttvh6" podStartSLOduration=1.359607528 podStartE2EDuration="10.431089508s" podCreationTimestamp="2026-02-19 00:22:06 +0000 UTC" firstStartedPulling="2026-02-19 00:22:06.624790571 +0000 UTC m=+756.461030560" lastFinishedPulling="2026-02-19 00:22:15.696272551 +0000 UTC m=+765.532512540" observedRunningTime="2026-02-19 00:22:16.429547315 +0000 UTC m=+766.265787304" watchObservedRunningTime="2026-02-19 00:22:16.431089508 +0000 UTC m=+766.267329487" Feb 19 00:22:18 crc kubenswrapper[5109]: I0219 00:22:18.289882 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:22:18 crc kubenswrapper[5109]: I0219 00:22:18.289976 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.871115 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872527 5109 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="registry-server" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872548 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="registry-server" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872577 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="extract-content" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872585 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="extract-content" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872612 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="extract-utilities" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872621 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="extract-utilities" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.872758 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="a6c36f1c-6a11-4867-a0b7-3b9f60510b87" containerName="registry-server" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.897565 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.897726 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.899524 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.903469 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.903713 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.903763 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.903805 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-wvbsn\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.905912 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.903829 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991558 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 
00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991625 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991678 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t7td\" (UniqueName: \"kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991708 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991736 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991779 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:31 crc kubenswrapper[5109]: I0219 00:22:31.991802 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.093885 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.093983 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8t7td\" (UniqueName: \"kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.094045 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.094099 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.094223 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.094280 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.094440 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.097080 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" 
(UniqueName: \"kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.101295 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.107139 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.108012 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.108065 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 
00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.117088 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.117991 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t7td\" (UniqueName: \"kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td\") pod \"default-interconnect-55bf8d5cb-spmzk\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.216588 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:22:32 crc kubenswrapper[5109]: I0219 00:22:32.630214 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:22:33 crc kubenswrapper[5109]: I0219 00:22:33.547416 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" event={"ID":"2ba8363a-1060-409a-8f60-eb79a78c4054","Type":"ContainerStarted","Data":"3a81487379848cc627c2f47a93f8a4fa59b43bc276c432936eb98ffe83c7b85d"} Feb 19 00:22:35 crc kubenswrapper[5109]: I0219 00:22:35.287655 5109 scope.go:117] "RemoveContainer" containerID="8e4b01fcf0a2c53a5946ca2505a369101f0565dcf6a7855a5cc18721e85a4e47" Feb 19 00:22:37 crc kubenswrapper[5109]: I0219 00:22:37.579667 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" 
event={"ID":"2ba8363a-1060-409a-8f60-eb79a78c4054","Type":"ContainerStarted","Data":"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e"} Feb 19 00:22:37 crc kubenswrapper[5109]: I0219 00:22:37.609147 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" podStartSLOduration=2.072539672 podStartE2EDuration="6.60911364s" podCreationTimestamp="2026-02-19 00:22:31 +0000 UTC" firstStartedPulling="2026-02-19 00:22:32.639409704 +0000 UTC m=+782.475649703" lastFinishedPulling="2026-02-19 00:22:37.175983682 +0000 UTC m=+787.012223671" observedRunningTime="2026-02-19 00:22:37.596790645 +0000 UTC m=+787.433030674" watchObservedRunningTime="2026-02-19 00:22:37.60911364 +0000 UTC m=+787.445353709" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.856250 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.866036 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.869686 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.870093 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.872221 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.872298 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.872455 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.872303 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.872723 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.873210 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.873422 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-cd6s7\"" Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.876891 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-default-0"] Feb 19 00:22:41 crc kubenswrapper[5109]: I0219 00:22:41.879151 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034351 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034409 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034438 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034460 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01c6aa79-2623-4589-89eb-4e7170e2edd4-config-out\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 
00:22:42.034618 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-tls-assets\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034719 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-web-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034749 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034791 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034822 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvg4\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-kube-api-access-rfvg4\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " 
pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034859 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034901 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.034963 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136299 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136397 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136478 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136513 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01c6aa79-2623-4589-89eb-4e7170e2edd4-config-out\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136609 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-tls-assets\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.136861 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-web-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138431 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: 
\"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138545 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138602 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138710 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvg4\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-kube-api-access-rfvg4\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138785 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.138909 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.139024 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.139474 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: E0219 00:22:42.139853 5109 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.139868 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: E0219 00:22:42.139957 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls podName:01c6aa79-2623-4589-89eb-4e7170e2edd4 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:22:42.639932577 +0000 UTC m=+792.476172566 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "01c6aa79-2623-4589-89eb-4e7170e2edd4") : secret "default-prometheus-proxy-tls" not found Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.141170 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/01c6aa79-2623-4589-89eb-4e7170e2edd4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.146106 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.146812 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.146859 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/15a64de7555a1c3b450dbdb40a88cd7858e2f92f7c8199a53541c7bf2f50a29b/globalmount\"" pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.147095 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.147524 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/01c6aa79-2623-4589-89eb-4e7170e2edd4-config-out\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.153952 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-web-config\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.160966 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-tls-assets\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " 
pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.174522 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvg4\" (UniqueName: \"kubernetes.io/projected/01c6aa79-2623-4589-89eb-4e7170e2edd4-kube-api-access-rfvg4\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.183015 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-47cf4daa-c78c-46b0-9cf0-1862a4f133d2\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: I0219 00:22:42.646924 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:42 crc kubenswrapper[5109]: E0219 00:22:42.647144 5109 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 19 00:22:42 crc kubenswrapper[5109]: E0219 00:22:42.647601 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls podName:01c6aa79-2623-4589-89eb-4e7170e2edd4 nodeName:}" failed. No retries permitted until 2026-02-19 00:22:43.6475645 +0000 UTC m=+793.483804529 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "01c6aa79-2623-4589-89eb-4e7170e2edd4") : secret "default-prometheus-proxy-tls" not found Feb 19 00:22:43 crc kubenswrapper[5109]: I0219 00:22:43.663537 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:43 crc kubenswrapper[5109]: I0219 00:22:43.669269 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/01c6aa79-2623-4589-89eb-4e7170e2edd4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"01c6aa79-2623-4589-89eb-4e7170e2edd4\") " pod="service-telemetry/prometheus-default-0" Feb 19 00:22:43 crc kubenswrapper[5109]: I0219 00:22:43.692925 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 19 00:22:44 crc kubenswrapper[5109]: I0219 00:22:44.172605 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 19 00:22:44 crc kubenswrapper[5109]: W0219 00:22:44.183387 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01c6aa79_2623_4589_89eb_4e7170e2edd4.slice/crio-49f984eeb66aca58039092d75a38662d2e439585fb302dbd3497d302ff8fede2 WatchSource:0}: Error finding container 49f984eeb66aca58039092d75a38662d2e439585fb302dbd3497d302ff8fede2: Status 404 returned error can't find the container with id 49f984eeb66aca58039092d75a38662d2e439585fb302dbd3497d302ff8fede2 Feb 19 00:22:44 crc kubenswrapper[5109]: I0219 00:22:44.638364 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerStarted","Data":"49f984eeb66aca58039092d75a38662d2e439585fb302dbd3497d302ff8fede2"} Feb 19 00:22:48 crc kubenswrapper[5109]: I0219 00:22:48.289568 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:22:48 crc kubenswrapper[5109]: I0219 00:22:48.290481 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:22:48 crc kubenswrapper[5109]: I0219 00:22:48.667403 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" 
event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerStarted","Data":"e84e540752cc87aa0366a6cd4595bc61903c6fff96d540691b55f6a332c336f3"} Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.519576 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-689mq"] Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.526848 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.531520 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-689mq"] Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.581850 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppnh\" (UniqueName: \"kubernetes.io/projected/b977cac1-63c2-4f60-b999-c3ca20fb5bc7-kube-api-access-dppnh\") pod \"default-snmp-webhook-6774d8dfbc-689mq\" (UID: \"b977cac1-63c2-4f60-b999-c3ca20fb5bc7\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.683839 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dppnh\" (UniqueName: \"kubernetes.io/projected/b977cac1-63c2-4f60-b999-c3ca20fb5bc7-kube-api-access-dppnh\") pod \"default-snmp-webhook-6774d8dfbc-689mq\" (UID: \"b977cac1-63c2-4f60-b999-c3ca20fb5bc7\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.703563 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppnh\" (UniqueName: \"kubernetes.io/projected/b977cac1-63c2-4f60-b999-c3ca20fb5bc7-kube-api-access-dppnh\") pod \"default-snmp-webhook-6774d8dfbc-689mq\" (UID: \"b977cac1-63c2-4f60-b999-c3ca20fb5bc7\") " 
pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" Feb 19 00:22:51 crc kubenswrapper[5109]: I0219 00:22:51.898791 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" Feb 19 00:22:52 crc kubenswrapper[5109]: I0219 00:22:52.144363 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-689mq"] Feb 19 00:22:52 crc kubenswrapper[5109]: W0219 00:22:52.146832 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb977cac1_63c2_4f60_b999_c3ca20fb5bc7.slice/crio-ef87f1b47feaf1fadf6d8104cca178746bb89b66963969e0578985c8793a9b7d WatchSource:0}: Error finding container ef87f1b47feaf1fadf6d8104cca178746bb89b66963969e0578985c8793a9b7d: Status 404 returned error can't find the container with id ef87f1b47feaf1fadf6d8104cca178746bb89b66963969e0578985c8793a9b7d Feb 19 00:22:52 crc kubenswrapper[5109]: I0219 00:22:52.700861 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" event={"ID":"b977cac1-63c2-4f60-b999-c3ca20fb5bc7","Type":"ContainerStarted","Data":"ef87f1b47feaf1fadf6d8104cca178746bb89b66963969e0578985c8793a9b7d"} Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.330595 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.338830 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.339095 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.342293 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.342355 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.342581 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.342305 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.344204 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.345206 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-zctdt\"" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436499 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-web-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436585 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-out\") pod \"alertmanager-default-0\" (UID: 
\"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436645 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-df37b5e2-f538-4a6c-8335-37d64034e412\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df37b5e2-f538-4a6c-8335-37d64034e412\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436690 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwkfk\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-kube-api-access-pwkfk\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436898 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-tls-assets\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.436943 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.437094 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: 
\"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.437139 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-volume\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.437236 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538509 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538603 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-web-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538675 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-out\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538704 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-df37b5e2-f538-4a6c-8335-37d64034e412\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df37b5e2-f538-4a6c-8335-37d64034e412\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538728 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pwkfk\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-kube-api-access-pwkfk\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: E0219 00:22:55.538727 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538782 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-tls-assets\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: E0219 00:22:55.538812 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls podName:62739e79-bc0a-4ec9-a8fb-a667a70621e5 nodeName:}" failed. 
No retries permitted until 2026-02-19 00:22:56.038793076 +0000 UTC m=+805.875033065 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "62739e79-bc0a-4ec9-a8fb-a667a70621e5") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538839 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538952 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.538977 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-volume\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.540892 5109 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.540931 5109 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-df37b5e2-f538-4a6c-8335-37d64034e412\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df37b5e2-f538-4a6c-8335-37d64034e412\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ea8debfa69bc6c7f012ce6e424f4b8b7c02c0e1a106fb2a9dcc523cf852e8357/globalmount\"" pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.552653 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.553309 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.555058 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-volume\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.555730 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/62739e79-bc0a-4ec9-a8fb-a667a70621e5-config-out\") 
pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.557604 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-tls-assets\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.558209 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-web-config\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.561273 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwkfk\" (UniqueName: \"kubernetes.io/projected/62739e79-bc0a-4ec9-a8fb-a667a70621e5-kube-api-access-pwkfk\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.580673 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-df37b5e2-f538-4a6c-8335-37d64034e412\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-df37b5e2-f538-4a6c-8335-37d64034e412\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.719427 5109 generic.go:358] "Generic (PLEG): container finished" podID="01c6aa79-2623-4589-89eb-4e7170e2edd4" containerID="e84e540752cc87aa0366a6cd4595bc61903c6fff96d540691b55f6a332c336f3" exitCode=0 Feb 19 00:22:55 crc kubenswrapper[5109]: I0219 00:22:55.719621 5109 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerDied","Data":"e84e540752cc87aa0366a6cd4595bc61903c6fff96d540691b55f6a332c336f3"} Feb 19 00:22:56 crc kubenswrapper[5109]: I0219 00:22:56.046923 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:56 crc kubenswrapper[5109]: E0219 00:22:56.047121 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:56 crc kubenswrapper[5109]: E0219 00:22:56.047183 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls podName:62739e79-bc0a-4ec9-a8fb-a667a70621e5 nodeName:}" failed. No retries permitted until 2026-02-19 00:22:57.047166231 +0000 UTC m=+806.883406230 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "62739e79-bc0a-4ec9-a8fb-a667a70621e5") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:57 crc kubenswrapper[5109]: I0219 00:22:57.062832 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:57 crc kubenswrapper[5109]: E0219 00:22:57.063218 5109 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:57 crc kubenswrapper[5109]: E0219 00:22:57.063272 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls podName:62739e79-bc0a-4ec9-a8fb-a667a70621e5 nodeName:}" failed. No retries permitted until 2026-02-19 00:22:59.063258482 +0000 UTC m=+808.899498471 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "62739e79-bc0a-4ec9-a8fb-a667a70621e5") : secret "default-alertmanager-proxy-tls" not found Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.100869 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.107354 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/62739e79-bc0a-4ec9-a8fb-a667a70621e5-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"62739e79-bc0a-4ec9-a8fb-a667a70621e5\") " pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.257227 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.753506 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" event={"ID":"b977cac1-63c2-4f60-b999-c3ca20fb5bc7","Type":"ContainerStarted","Data":"01ca70dfa2878d5b9eaf8fa0daad89c8c8d29773fa1f92e6e8ac8b43fdf6b2dc"} Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.771239 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-689mq" podStartSLOduration=1.806231063 podStartE2EDuration="8.771220768s" podCreationTimestamp="2026-02-19 00:22:51 +0000 UTC" firstStartedPulling="2026-02-19 00:22:52.148911477 +0000 UTC m=+801.985151466" lastFinishedPulling="2026-02-19 00:22:59.113901172 +0000 UTC m=+808.950141171" observedRunningTime="2026-02-19 00:22:59.767825903 +0000 UTC m=+809.604065892" watchObservedRunningTime="2026-02-19 00:22:59.771220768 +0000 UTC m=+809.607460777" Feb 19 00:22:59 crc kubenswrapper[5109]: I0219 00:22:59.939856 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 19 00:22:59 crc kubenswrapper[5109]: W0219 00:22:59.943333 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62739e79_bc0a_4ec9_a8fb_a667a70621e5.slice/crio-1df221a8b71f1cabf2dfdee2d110b4449738c939a58d50079f1b266273843e86 WatchSource:0}: Error finding container 1df221a8b71f1cabf2dfdee2d110b4449738c939a58d50079f1b266273843e86: Status 404 returned error can't find the container with id 1df221a8b71f1cabf2dfdee2d110b4449738c939a58d50079f1b266273843e86 Feb 19 00:23:01 crc kubenswrapper[5109]: I0219 00:23:01.446368 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerStarted","Data":"1df221a8b71f1cabf2dfdee2d110b4449738c939a58d50079f1b266273843e86"} Feb 19 00:23:02 crc kubenswrapper[5109]: I0219 00:23:02.432072 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerStarted","Data":"4c606e214304e8592935cb2353ca2fe0e477cbd11248fab7bc676e96166f0010"} Feb 19 00:23:03 crc kubenswrapper[5109]: I0219 00:23:03.440990 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerStarted","Data":"f218cae6f12f31fbba132a88b6b8fcb8b74284f3d742ba4d1911274a1b38ddd5"} Feb 19 00:23:05 crc kubenswrapper[5109]: I0219 00:23:05.461312 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerStarted","Data":"231d23637a88ec67b1f947d11e64ff08af7fa69ebb12f8950d53a6e96da29644"} Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.609239 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9"] Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.899577 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9"] Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.899609 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.902606 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.902707 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.902803 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-kppjp\"" Feb 19 00:23:07 crc kubenswrapper[5109]: I0219 00:23:07.902971 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.001506 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/52838cf3-d3af-4769-b402-60663fda6d46-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.001548 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs6ck\" (UniqueName: \"kubernetes.io/projected/52838cf3-d3af-4769-b402-60663fda6d46-kube-api-access-zs6ck\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.001763 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/52838cf3-d3af-4769-b402-60663fda6d46-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.002105 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.002197 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.103284 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/52838cf3-d3af-4769-b402-60663fda6d46-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.103356 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.103387 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.103611 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/52838cf3-d3af-4769-b402-60663fda6d46-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.103676 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zs6ck\" (UniqueName: \"kubernetes.io/projected/52838cf3-d3af-4769-b402-60663fda6d46-kube-api-access-zs6ck\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.104515 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/52838cf3-d3af-4769-b402-60663fda6d46-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: 
\"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.104793 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/52838cf3-d3af-4769-b402-60663fda6d46-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: E0219 00:23:08.105586 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:08 crc kubenswrapper[5109]: E0219 00:23:08.105657 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls podName:52838cf3-d3af-4769-b402-60663fda6d46 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:08.605642837 +0000 UTC m=+818.441882826 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" (UID: "52838cf3-d3af-4769-b402-60663fda6d46") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.113868 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.120929 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs6ck\" (UniqueName: \"kubernetes.io/projected/52838cf3-d3af-4769-b402-60663fda6d46-kube-api-access-zs6ck\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: I0219 00:23:08.610503 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:08 crc kubenswrapper[5109]: E0219 00:23:08.610669 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:08 crc kubenswrapper[5109]: 
E0219 00:23:08.610757 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls podName:52838cf3-d3af-4769-b402-60663fda6d46 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:09.61073834 +0000 UTC m=+819.446978329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" (UID: "52838cf3-d3af-4769-b402-60663fda6d46") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 19 00:23:09 crc kubenswrapper[5109]: I0219 00:23:09.494542 5109 generic.go:358] "Generic (PLEG): container finished" podID="62739e79-bc0a-4ec9-a8fb-a667a70621e5" containerID="4c606e214304e8592935cb2353ca2fe0e477cbd11248fab7bc676e96166f0010" exitCode=0 Feb 19 00:23:09 crc kubenswrapper[5109]: I0219 00:23:09.494607 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerDied","Data":"4c606e214304e8592935cb2353ca2fe0e477cbd11248fab7bc676e96166f0010"} Feb 19 00:23:09 crc kubenswrapper[5109]: I0219 00:23:09.625539 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:09 crc kubenswrapper[5109]: I0219 00:23:09.631594 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/52838cf3-d3af-4769-b402-60663fda6d46-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-cp5j9\" (UID: \"52838cf3-d3af-4769-b402-60663fda6d46\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:09 crc kubenswrapper[5109]: I0219 00:23:09.720763 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.836958 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97"] Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.850275 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97"] Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.850449 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.852251 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.853406 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.947298 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.947414 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65l98\" (UniqueName: \"kubernetes.io/projected/c2d1ace0-e174-4538-8038-bef4c5ba338e-kube-api-access-65l98\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.947445 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2d1ace0-e174-4538-8038-bef4c5ba338e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:10 crc 
kubenswrapper[5109]: I0219 00:23:10.947490 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:10 crc kubenswrapper[5109]: I0219 00:23:10.947526 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2d1ace0-e174-4538-8038-bef4c5ba338e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.049141 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.049293 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2d1ace0-e174-4538-8038-bef4c5ba338e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.049410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.050714 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-65l98\" (UniqueName: \"kubernetes.io/projected/c2d1ace0-e174-4538-8038-bef4c5ba338e-kube-api-access-65l98\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.050836 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2d1ace0-e174-4538-8038-bef4c5ba338e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.051669 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2d1ace0-e174-4538-8038-bef4c5ba338e-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: E0219 00:23:11.053147 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:11 crc kubenswrapper[5109]: E0219 00:23:11.053225 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls podName:c2d1ace0-e174-4538-8038-bef4c5ba338e nodeName:}" failed. No retries permitted until 2026-02-19 00:23:11.553204431 +0000 UTC m=+821.389444510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" (UID: "c2d1ace0-e174-4538-8038-bef4c5ba338e") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.053920 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2d1ace0-e174-4538-8038-bef4c5ba338e-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.058476 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.061360 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9"] Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.068480 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-65l98\" (UniqueName: \"kubernetes.io/projected/c2d1ace0-e174-4538-8038-bef4c5ba338e-kube-api-access-65l98\") pod 
\"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.524723 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"01c6aa79-2623-4589-89eb-4e7170e2edd4","Type":"ContainerStarted","Data":"4588d14c06fc4fa460c19af9b85ab428f2763a90fb85bd899287fdeb71fe29f0"} Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.557117 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.070095553 podStartE2EDuration="31.557099871s" podCreationTimestamp="2026-02-19 00:22:40 +0000 UTC" firstStartedPulling="2026-02-19 00:22:44.189133476 +0000 UTC m=+794.025373465" lastFinishedPulling="2026-02-19 00:23:10.676137794 +0000 UTC m=+820.512377783" observedRunningTime="2026-02-19 00:23:11.550764413 +0000 UTC m=+821.387004442" watchObservedRunningTime="2026-02-19 00:23:11.557099871 +0000 UTC m=+821.393339850" Feb 19 00:23:11 crc kubenswrapper[5109]: I0219 00:23:11.557960 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:11 crc kubenswrapper[5109]: E0219 00:23:11.558146 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:11 crc kubenswrapper[5109]: E0219 00:23:11.558196 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls podName:c2d1ace0-e174-4538-8038-bef4c5ba338e nodeName:}" failed. No retries permitted until 2026-02-19 00:23:12.558185381 +0000 UTC m=+822.394425370 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" (UID: "c2d1ace0-e174-4538-8038-bef4c5ba338e") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 19 00:23:11 crc kubenswrapper[5109]: W0219 00:23:11.568097 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52838cf3_d3af_4769_b402_60663fda6d46.slice/crio-36f5b3995c21963c0f1ea6345bf3274775e4842c8ffc13ce8eeaa34cceaa0644 WatchSource:0}: Error finding container 36f5b3995c21963c0f1ea6345bf3274775e4842c8ffc13ce8eeaa34cceaa0644: Status 404 returned error can't find the container with id 36f5b3995c21963c0f1ea6345bf3274775e4842c8ffc13ce8eeaa34cceaa0644 Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.534103 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"2c51f835c24b9bd7b99cd81dd4835d96251f187ab360b128b30663b9c82e93b2"} Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.534411 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"36f5b3995c21963c0f1ea6345bf3274775e4842c8ffc13ce8eeaa34cceaa0644"} Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.537524 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/alertmanager-default-0" event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerStarted","Data":"15386a8568123f87320b3abdcea46795983e6d484c0071ea8d4fc216a1a2eb4a"} Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.581506 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.585351 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c2d1ace0-e174-4538-8038-bef4c5ba338e-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97\" (UID: \"c2d1ace0-e174-4538-8038-bef4c5ba338e\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:12 crc kubenswrapper[5109]: I0219 00:23:12.669628 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" Feb 19 00:23:13 crc kubenswrapper[5109]: I0219 00:23:13.126456 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97"] Feb 19 00:23:13 crc kubenswrapper[5109]: I0219 00:23:13.544766 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"09e17d9adfe92411a62e6ebac2497a83a9d70c71b12bec99e4b839070bb97ee0"} Feb 19 00:23:13 crc kubenswrapper[5109]: I0219 00:23:13.694404 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:13 crc kubenswrapper[5109]: I0219 00:23:13.694456 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:13 crc kubenswrapper[5109]: I0219 00:23:13.736291 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.169051 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6"] Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.175555 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.177208 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.177937 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.180926 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6"] Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.315351 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.315396 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.315440 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ltrs\" (UniqueName: \"kubernetes.io/projected/09a4f0cf-0742-4cf8-9687-1718b399b321-kube-api-access-2ltrs\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.315497 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/09a4f0cf-0742-4cf8-9687-1718b399b321-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.315522 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/09a4f0cf-0742-4cf8-9687-1718b399b321-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.416827 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/09a4f0cf-0742-4cf8-9687-1718b399b321-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.416869 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/09a4f0cf-0742-4cf8-9687-1718b399b321-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.416933 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.416953 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.417005 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2ltrs\" (UniqueName: \"kubernetes.io/projected/09a4f0cf-0742-4cf8-9687-1718b399b321-kube-api-access-2ltrs\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: E0219 00:23:14.417326 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.417391 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/09a4f0cf-0742-4cf8-9687-1718b399b321-socket-dir\") pod 
\"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: E0219 00:23:14.417429 5109 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls podName:09a4f0cf-0742-4cf8-9687-1718b399b321 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:14.917395982 +0000 UTC m=+824.753635971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" (UID: "09a4f0cf-0742-4cf8-9687-1718b399b321") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.417796 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/09a4f0cf-0742-4cf8-9687-1718b399b321-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.423151 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.449193 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ltrs\" 
(UniqueName: \"kubernetes.io/projected/09a4f0cf-0742-4cf8-9687-1718b399b321-kube-api-access-2ltrs\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.554256 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerStarted","Data":"ceae6cafc13258218e1c12dd0fd922ad624f70a4823b8750161a502e691e05e2"} Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.556092 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"51850433048e2ae90d110db3c93313688ec37472f9e3a676f0a0f1fbf7417a9a"} Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.598248 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 19 00:23:14 crc kubenswrapper[5109]: I0219 00:23:14.934716 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:14 crc kubenswrapper[5109]: E0219 00:23:14.935152 5109 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:23:14 crc kubenswrapper[5109]: E0219 00:23:14.935307 5109 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls podName:09a4f0cf-0742-4cf8-9687-1718b399b321 nodeName:}" failed. No retries permitted until 2026-02-19 00:23:15.935285613 +0000 UTC m=+825.771525602 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" (UID: "09a4f0cf-0742-4cf8-9687-1718b399b321") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 19 00:23:15 crc kubenswrapper[5109]: I0219 00:23:15.564963 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"62739e79-bc0a-4ec9-a8fb-a667a70621e5","Type":"ContainerStarted","Data":"5f3beb6f9470cb64675a3f18e3570fc9b9603185c1cff3933e4eee8614f06bfb"} Feb 19 00:23:15 crc kubenswrapper[5109]: I0219 00:23:15.951962 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:15 crc kubenswrapper[5109]: I0219 00:23:15.957378 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/09a4f0cf-0742-4cf8-9687-1718b399b321-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6\" (UID: \"09a4f0cf-0742-4cf8-9687-1718b399b321\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:15 crc kubenswrapper[5109]: I0219 00:23:15.992965 5109 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.289047 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.289393 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.289449 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.290134 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.290236 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e" gracePeriod=600 Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.589066 5109 
generic.go:358] "Generic (PLEG): container finished" podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e" exitCode=0 Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.589180 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e"} Feb 19 00:23:18 crc kubenswrapper[5109]: I0219 00:23:18.589262 5109 scope.go:117] "RemoveContainer" containerID="980745c41d10b113c0972af8c3ad9b792bfea4ea750ae9f895dcfa1fb03c43ba" Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.536568 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.550266582 podStartE2EDuration="25.536544812s" podCreationTimestamp="2026-02-19 00:22:54 +0000 UTC" firstStartedPulling="2026-02-19 00:23:09.49604262 +0000 UTC m=+819.332282609" lastFinishedPulling="2026-02-19 00:23:14.48232085 +0000 UTC m=+824.318560839" observedRunningTime="2026-02-19 00:23:15.593292178 +0000 UTC m=+825.429532187" watchObservedRunningTime="2026-02-19 00:23:19.536544812 +0000 UTC m=+829.372784801" Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.541503 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6"] Feb 19 00:23:19 crc kubenswrapper[5109]: W0219 00:23:19.547764 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09a4f0cf_0742_4cf8_9687_1718b399b321.slice/crio-c4b6dcb9627c9313031e4961feb3c052c80e96c7180cfd4d0d024bc0a70d423b WatchSource:0}: Error finding container c4b6dcb9627c9313031e4961feb3c052c80e96c7180cfd4d0d024bc0a70d423b: Status 404 returned error can't find the container with 
id c4b6dcb9627c9313031e4961feb3c052c80e96c7180cfd4d0d024bc0a70d423b Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.597979 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"bcbbeb81819dcb0dc7a801c53398c26b3c0c5636fb9ac16ae5324167a0cff799"} Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.600009 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"c4b6dcb9627c9313031e4961feb3c052c80e96c7180cfd4d0d024bc0a70d423b"} Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.605933 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668"} Feb 19 00:23:19 crc kubenswrapper[5109]: I0219 00:23:19.608222 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"7a4fc2e040458b527b3be0eeb2dd1cdff5c6d50cb27c61dcf121d19e58ada338"} Feb 19 00:23:20 crc kubenswrapper[5109]: I0219 00:23:20.618002 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"8f5de58173a8a6adf0b78116fa6608f4acf8cb92d723156afc1cb1f0e1d9d952"} Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.305692 5109 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85"] Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.318396 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85"] Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.318539 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.323963 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.324294 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.443177 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.443221 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp9lp\" (UniqueName: \"kubernetes.io/projected/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-kube-api-access-zp9lp\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.443308 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.443366 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.544688 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.544731 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zp9lp\" (UniqueName: \"kubernetes.io/projected/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-kube-api-access-zp9lp\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.544804 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-socket-dir\") pod 
\"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.544845 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.545548 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.546551 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.554427 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: 
I0219 00:23:21.564036 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp9lp\" (UniqueName: \"kubernetes.io/projected/53dee7d4-1233-4e93-b0e5-89b35ef19b4a-kube-api-access-zp9lp\") pod \"default-cloud1-coll-event-smartgateway-6bbc884464-llp85\" (UID: \"53dee7d4-1233-4e93-b0e5-89b35ef19b4a\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:21 crc kubenswrapper[5109]: I0219 00:23:21.649349 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.324879 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2"] Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.624597 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2"] Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.624786 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.628038 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.761708 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.761846 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.761915 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.761950 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m7cq\" (UniqueName: 
\"kubernetes.io/projected/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-kube-api-access-2m7cq\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.863018 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.863742 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.863892 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2m7cq\" (UniqueName: \"kubernetes.io/projected/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-kube-api-access-2m7cq\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.863973 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.864019 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.865123 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.873506 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.888912 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m7cq\" (UniqueName: \"kubernetes.io/projected/b7a60315-59d0-4fd9-8a9e-4ecb38a8c926-kube-api-access-2m7cq\") pod \"default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2\" (UID: \"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:22 crc kubenswrapper[5109]: I0219 00:23:22.943749 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" Feb 19 00:23:31 crc kubenswrapper[5109]: I0219 00:23:31.718329 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"794aa90cecced8025e829c74f70550ae24bc85668b69585a65554fc33a037c43"} Feb 19 00:23:31 crc kubenswrapper[5109]: I0219 00:23:31.737139 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2"] Feb 19 00:23:31 crc kubenswrapper[5109]: W0219 00:23:31.738679 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7a60315_59d0_4fd9_8a9e_4ecb38a8c926.slice/crio-66c7b5bbc8d18b9643bdc303a6313f0623678baafd1a0373833ea6334437b6f1 WatchSource:0}: Error finding container 66c7b5bbc8d18b9643bdc303a6313f0623678baafd1a0373833ea6334437b6f1: Status 404 returned error can't find the container with id 66c7b5bbc8d18b9643bdc303a6313f0623678baafd1a0373833ea6334437b6f1 Feb 19 00:23:31 crc kubenswrapper[5109]: I0219 00:23:31.771396 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85"] Feb 19 00:23:32 crc kubenswrapper[5109]: I0219 00:23:32.729869 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" event={"ID":"53dee7d4-1233-4e93-b0e5-89b35ef19b4a","Type":"ContainerStarted","Data":"544c6577b4bd963b0848dc1d364144cde2315958582d57b88fe2b795ffe96ed7"} Feb 19 00:23:32 crc kubenswrapper[5109]: I0219 00:23:32.731588 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" 
event={"ID":"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926","Type":"ContainerStarted","Data":"66c7b5bbc8d18b9643bdc303a6313f0623678baafd1a0373833ea6334437b6f1"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.739716 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" event={"ID":"53dee7d4-1233-4e93-b0e5-89b35ef19b4a","Type":"ContainerStarted","Data":"799184d285edf2c7d67ef9571a0d909e9cd172e8ee7d48c7b31b5bbaa7f37096"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.740107 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" event={"ID":"53dee7d4-1233-4e93-b0e5-89b35ef19b4a","Type":"ContainerStarted","Data":"f10c985f829ab4134136ed8ef5c75c5ea1f6e2df17a3148c4468a12bb765476c"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.744416 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" event={"ID":"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926","Type":"ContainerStarted","Data":"7623c0a601d28bb1084a1e278f1fba73d877f0efe3c4396e94cd8b836e31a5b6"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.744668 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" event={"ID":"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926","Type":"ContainerStarted","Data":"c63123f36f1c604724b6d744b68a5c8fcfe5c257c4f3f15f9fe44cafdbee6dfd"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.746782 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"2abc34e464a59b977a0ab4ebef708b47f6a1f15bb0bfa59d32b0747a3dec4dcf"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.748500 5109 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"a56d012f8e8faae2b1a6d5e56fdd9963e06e65d76170fbf85695b0ac6d4b1fdc"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.750185 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"dc0965550dad9f8b36ff0f7af2154db69e3cf6e3d10ae2e2432cee1d89f32331"} Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.778458 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" podStartSLOduration=11.237251113 podStartE2EDuration="12.778434917s" podCreationTimestamp="2026-02-19 00:23:21 +0000 UTC" firstStartedPulling="2026-02-19 00:23:31.763093946 +0000 UTC m=+841.599333945" lastFinishedPulling="2026-02-19 00:23:33.30427775 +0000 UTC m=+843.140517749" observedRunningTime="2026-02-19 00:23:33.758612742 +0000 UTC m=+843.594852731" watchObservedRunningTime="2026-02-19 00:23:33.778434917 +0000 UTC m=+843.614674916" Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.791807 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" podStartSLOduration=6.481117292 podStartE2EDuration="19.791782911s" podCreationTimestamp="2026-02-19 00:23:14 +0000 UTC" firstStartedPulling="2026-02-19 00:23:19.549706651 +0000 UTC m=+829.385946630" lastFinishedPulling="2026-02-19 00:23:32.86037226 +0000 UTC m=+842.696612249" observedRunningTime="2026-02-19 00:23:33.781569685 +0000 UTC m=+843.617809694" watchObservedRunningTime="2026-02-19 00:23:33.791782911 +0000 UTC m=+843.628022920" Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.818901 5109 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" podStartSLOduration=5.722137923 podStartE2EDuration="26.818870249s" podCreationTimestamp="2026-02-19 00:23:07 +0000 UTC" firstStartedPulling="2026-02-19 00:23:11.570098645 +0000 UTC m=+821.406338634" lastFinishedPulling="2026-02-19 00:23:32.666830961 +0000 UTC m=+842.503070960" observedRunningTime="2026-02-19 00:23:33.80567465 +0000 UTC m=+843.641914629" watchObservedRunningTime="2026-02-19 00:23:33.818870249 +0000 UTC m=+843.655110258" Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.836023 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" podStartSLOduration=4.314943873 podStartE2EDuration="23.835998769s" podCreationTimestamp="2026-02-19 00:23:10 +0000 UTC" firstStartedPulling="2026-02-19 00:23:13.138200833 +0000 UTC m=+822.974440822" lastFinishedPulling="2026-02-19 00:23:32.659255719 +0000 UTC m=+842.495495718" observedRunningTime="2026-02-19 00:23:33.829444075 +0000 UTC m=+843.665684064" watchObservedRunningTime="2026-02-19 00:23:33.835998769 +0000 UTC m=+843.672238758" Feb 19 00:23:33 crc kubenswrapper[5109]: I0219 00:23:33.846397 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" podStartSLOduration=10.182369146 podStartE2EDuration="11.84638315s" podCreationTimestamp="2026-02-19 00:23:22 +0000 UTC" firstStartedPulling="2026-02-19 00:23:31.741035658 +0000 UTC m=+841.577275647" lastFinishedPulling="2026-02-19 00:23:33.405049652 +0000 UTC m=+843.241289651" observedRunningTime="2026-02-19 00:23:33.845815334 +0000 UTC m=+843.682055323" watchObservedRunningTime="2026-02-19 00:23:33.84638315 +0000 UTC m=+843.682623139" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.317836 5109 kubelet.go:2553] "SyncLoop 
DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.318270 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" podUID="2ba8363a-1060-409a-8f60-eb79a78c4054" containerName="default-interconnect" containerID="cri-o://1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e" gracePeriod=30 Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.709341 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.740789 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-z8dc7"] Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.741663 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ba8363a-1060-409a-8f60-eb79a78c4054" containerName="default-interconnect" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.741682 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba8363a-1060-409a-8f60-eb79a78c4054" containerName="default-interconnect" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.741858 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ba8363a-1060-409a-8f60-eb79a78c4054" containerName="default-interconnect" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.748205 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.757235 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-z8dc7"] Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.759027 5109 generic.go:358] "Generic (PLEG): container finished" podID="2ba8363a-1060-409a-8f60-eb79a78c4054" containerID="1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e" exitCode=0 Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.759240 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.759432 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" event={"ID":"2ba8363a-1060-409a-8f60-eb79a78c4054","Type":"ContainerDied","Data":"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e"} Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.759469 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-spmzk" event={"ID":"2ba8363a-1060-409a-8f60-eb79a78c4054","Type":"ContainerDied","Data":"3a81487379848cc627c2f47a93f8a4fa59b43bc276c432936eb98ffe83c7b85d"} Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.759747 5109 scope.go:117] "RemoveContainer" containerID="1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.771713 5109 generic.go:358] "Generic (PLEG): container finished" podID="09a4f0cf-0742-4cf8-9687-1718b399b321" containerID="794aa90cecced8025e829c74f70550ae24bc85668b69585a65554fc33a037c43" exitCode=0 Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.771872 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerDied","Data":"794aa90cecced8025e829c74f70550ae24bc85668b69585a65554fc33a037c43"} Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.777016 5109 generic.go:358] "Generic (PLEG): container finished" podID="52838cf3-d3af-4769-b402-60663fda6d46" containerID="7a4fc2e040458b527b3be0eeb2dd1cdff5c6d50cb27c61dcf121d19e58ada338" exitCode=0 Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.777131 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerDied","Data":"7a4fc2e040458b527b3be0eeb2dd1cdff5c6d50cb27c61dcf121d19e58ada338"} Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.777747 5109 scope.go:117] "RemoveContainer" containerID="7a4fc2e040458b527b3be0eeb2dd1cdff5c6d50cb27c61dcf121d19e58ada338" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.778826 5109 scope.go:117] "RemoveContainer" containerID="794aa90cecced8025e829c74f70550ae24bc85668b69585a65554fc33a037c43" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.785641 5109 generic.go:358] "Generic (PLEG): container finished" podID="c2d1ace0-e174-4538-8038-bef4c5ba338e" containerID="bcbbeb81819dcb0dc7a801c53398c26b3c0c5636fb9ac16ae5324167a0cff799" exitCode=0 Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.786526 5109 scope.go:117] "RemoveContainer" containerID="bcbbeb81819dcb0dc7a801c53398c26b3c0c5636fb9ac16ae5324167a0cff799" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.786690 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerDied","Data":"bcbbeb81819dcb0dc7a801c53398c26b3c0c5636fb9ac16ae5324167a0cff799"} Feb 19 00:23:34 crc 
kubenswrapper[5109]: I0219 00:23:34.790763 5109 scope.go:117] "RemoveContainer" containerID="1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e" Feb 19 00:23:34 crc kubenswrapper[5109]: E0219 00:23:34.793993 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e\": container with ID starting with 1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e not found: ID does not exist" containerID="1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.794027 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e"} err="failed to get container status \"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e\": rpc error: code = NotFound desc = could not find container \"1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e\": container with ID starting with 1791d7c05d9a2541bcd722aa02da7db8a9fe9c1cb834aa3558fadeeb0a7cad3e not found: ID does not exist" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855108 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855163 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: 
\"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855255 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855290 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t7td\" (UniqueName: \"kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855308 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855351 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855422 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users\") pod \"2ba8363a-1060-409a-8f60-eb79a78c4054\" (UID: \"2ba8363a-1060-409a-8f60-eb79a78c4054\") " Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855708 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855737 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmdrd\" (UniqueName: \"kubernetes.io/projected/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-kube-api-access-cmdrd\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855804 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.855825 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.856105 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" 
(UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.856184 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-users\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.856216 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-config\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.858090 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.862790 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). 
InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.862845 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td" (OuterVolumeSpecName: "kube-api-access-8t7td") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "kube-api-access-8t7td". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.862854 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.864740 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.865120 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "sasl-users". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.869916 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "2ba8363a-1060-409a-8f60-eb79a78c4054" (UID: "2ba8363a-1060-409a-8f60-eb79a78c4054"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957453 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957504 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-users\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957528 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-config\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957603 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957624 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cmdrd\" (UniqueName: \"kubernetes.io/projected/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-kube-api-access-cmdrd\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957674 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957693 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957757 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 
19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957769 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957779 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957788 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8t7td\" (UniqueName: \"kubernetes.io/projected/2ba8363a-1060-409a-8f60-eb79a78c4054-kube-api-access-8t7td\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957801 5109 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957811 5109 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.957821 5109 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/2ba8363a-1060-409a-8f60-eb79a78c4054-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.958982 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-config\") pod 
\"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.961595 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.963246 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.963645 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.967261 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc 
kubenswrapper[5109]: I0219 00:23:34.967380 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-sasl-users\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:34 crc kubenswrapper[5109]: I0219 00:23:34.973724 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmdrd\" (UniqueName: \"kubernetes.io/projected/78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0-kube-api-access-cmdrd\") pod \"default-interconnect-55bf8d5cb-z8dc7\" (UID: \"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0\") " pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.075094 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.069786 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.083851 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-spmzk"] Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.477745 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-z8dc7"] Feb 19 00:23:35 crc kubenswrapper[5109]: W0219 00:23:35.483224 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78aa9fcf_a6f5_4fee_af77_f741bd0f1ee0.slice/crio-4c65e34216dd15c3197b4c9965ddf974e3e816d82ab5e00d84e4fa5462557eee WatchSource:0}: Error finding container 4c65e34216dd15c3197b4c9965ddf974e3e816d82ab5e00d84e4fa5462557eee: Status 404 returned error can't find the container with id 4c65e34216dd15c3197b4c9965ddf974e3e816d82ab5e00d84e4fa5462557eee Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.795206 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.797882 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.800192 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" 
event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.801735 5109 generic.go:358] "Generic (PLEG): container finished" podID="53dee7d4-1233-4e93-b0e5-89b35ef19b4a" containerID="f10c985f829ab4134136ed8ef5c75c5ea1f6e2df17a3148c4468a12bb765476c" exitCode=0 Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.801820 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" event={"ID":"53dee7d4-1233-4e93-b0e5-89b35ef19b4a","Type":"ContainerDied","Data":"f10c985f829ab4134136ed8ef5c75c5ea1f6e2df17a3148c4468a12bb765476c"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.802184 5109 scope.go:117] "RemoveContainer" containerID="f10c985f829ab4134136ed8ef5c75c5ea1f6e2df17a3148c4468a12bb765476c" Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.804096 5109 generic.go:358] "Generic (PLEG): container finished" podID="b7a60315-59d0-4fd9-8a9e-4ecb38a8c926" containerID="c63123f36f1c604724b6d744b68a5c8fcfe5c257c4f3f15f9fe44cafdbee6dfd" exitCode=0 Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.804143 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" event={"ID":"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926","Type":"ContainerDied","Data":"c63123f36f1c604724b6d744b68a5c8fcfe5c257c4f3f15f9fe44cafdbee6dfd"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.804456 5109 scope.go:117] "RemoveContainer" containerID="c63123f36f1c604724b6d744b68a5c8fcfe5c257c4f3f15f9fe44cafdbee6dfd" Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.806543 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" 
event={"ID":"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0","Type":"ContainerStarted","Data":"5309dc9985bdcdd950d8ebdb395fc61f33a8eb95bd3fd4e1c360e9a86dba47f2"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.806569 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" event={"ID":"78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0","Type":"ContainerStarted","Data":"4c65e34216dd15c3197b4c9965ddf974e3e816d82ab5e00d84e4fa5462557eee"} Feb 19 00:23:35 crc kubenswrapper[5109]: I0219 00:23:35.956853 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-z8dc7" podStartSLOduration=1.956826685 podStartE2EDuration="1.956826685s" podCreationTimestamp="2026-02-19 00:23:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:23:35.949002388 +0000 UTC m=+845.785242377" watchObservedRunningTime="2026-02-19 00:23:35.956826685 +0000 UTC m=+845.793066674" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.816383 5109 generic.go:358] "Generic (PLEG): container finished" podID="52838cf3-d3af-4769-b402-60663fda6d46" containerID="08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c" exitCode=0 Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.816479 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerDied","Data":"08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c"} Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.816768 5109 scope.go:117] "RemoveContainer" containerID="7a4fc2e040458b527b3be0eeb2dd1cdff5c6d50cb27c61dcf121d19e58ada338" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.817173 5109 scope.go:117] "RemoveContainer" 
containerID="08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c" Feb 19 00:23:36 crc kubenswrapper[5109]: E0219 00:23:36.817465 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-cp5j9_service-telemetry(52838cf3-d3af-4769-b402-60663fda6d46)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" podUID="52838cf3-d3af-4769-b402-60663fda6d46" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.822863 5109 generic.go:358] "Generic (PLEG): container finished" podID="c2d1ace0-e174-4538-8038-bef4c5ba338e" containerID="fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb" exitCode=0 Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.823319 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerDied","Data":"fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb"} Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.823921 5109 scope.go:117] "RemoveContainer" containerID="fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb" Feb 19 00:23:36 crc kubenswrapper[5109]: E0219 00:23:36.824307 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97_service-telemetry(c2d1ace0-e174-4538-8038-bef4c5ba338e)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" podUID="c2d1ace0-e174-4538-8038-bef4c5ba338e" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.832643 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-coll-event-smartgateway-6bbc884464-llp85" event={"ID":"53dee7d4-1233-4e93-b0e5-89b35ef19b4a","Type":"ContainerStarted","Data":"30c20bb40d98599ffdc7b2964251fa5b7c0e6f74d69b246e269ab8ea2492c542"} Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.839993 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2" event={"ID":"b7a60315-59d0-4fd9-8a9e-4ecb38a8c926","Type":"ContainerStarted","Data":"8121f44eb77df301ae8933fce53c793671dd39408427903fc956a27df7749e50"} Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.850839 5109 generic.go:358] "Generic (PLEG): container finished" podID="09a4f0cf-0742-4cf8-9687-1718b399b321" containerID="811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56" exitCode=0 Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.850927 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerDied","Data":"811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56"} Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.851925 5109 scope.go:117] "RemoveContainer" containerID="811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56" Feb 19 00:23:36 crc kubenswrapper[5109]: E0219 00:23:36.852334 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6_service-telemetry(09a4f0cf-0742-4cf8-9687-1718b399b321)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" podUID="09a4f0cf-0742-4cf8-9687-1718b399b321" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.875881 5109 scope.go:117] "RemoveContainer" 
containerID="bcbbeb81819dcb0dc7a801c53398c26b3c0c5636fb9ac16ae5324167a0cff799" Feb 19 00:23:36 crc kubenswrapper[5109]: I0219 00:23:36.942135 5109 scope.go:117] "RemoveContainer" containerID="794aa90cecced8025e829c74f70550ae24bc85668b69585a65554fc33a037c43" Feb 19 00:23:37 crc kubenswrapper[5109]: I0219 00:23:37.000239 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba8363a-1060-409a-8f60-eb79a78c4054" path="/var/lib/kubelet/pods/2ba8363a-1060-409a-8f60-eb79a78c4054/volumes" Feb 19 00:23:37 crc kubenswrapper[5109]: I0219 00:23:37.861340 5109 scope.go:117] "RemoveContainer" containerID="08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c" Feb 19 00:23:37 crc kubenswrapper[5109]: E0219 00:23:37.861808 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-cp5j9_service-telemetry(52838cf3-d3af-4769-b402-60663fda6d46)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" podUID="52838cf3-d3af-4769-b402-60663fda6d46" Feb 19 00:23:37 crc kubenswrapper[5109]: I0219 00:23:37.865524 5109 scope.go:117] "RemoveContainer" containerID="fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb" Feb 19 00:23:37 crc kubenswrapper[5109]: E0219 00:23:37.865815 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97_service-telemetry(c2d1ace0-e174-4538-8038-bef4c5ba338e)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" podUID="c2d1ace0-e174-4538-8038-bef4c5ba338e" Feb 19 00:23:37 crc kubenswrapper[5109]: I0219 00:23:37.874609 5109 scope.go:117] "RemoveContainer" 
containerID="811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56" Feb 19 00:23:37 crc kubenswrapper[5109]: E0219 00:23:37.874876 5109 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6_service-telemetry(09a4f0cf-0742-4cf8-9687-1718b399b321)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" podUID="09a4f0cf-0742-4cf8-9687-1718b399b321" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.673428 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.681823 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.684967 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.685173 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.687271 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.787660 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/44531e3b-2fc7-438c-b280-716c81d528ea-qdr-test-config\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.788041 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/44531e3b-2fc7-438c-b280-716c81d528ea-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.788171 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk48t\" (UniqueName: \"kubernetes.io/projected/44531e3b-2fc7-438c-b280-716c81d528ea-kube-api-access-lk48t\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.889165 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lk48t\" (UniqueName: \"kubernetes.io/projected/44531e3b-2fc7-438c-b280-716c81d528ea-kube-api-access-lk48t\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.889303 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/44531e3b-2fc7-438c-b280-716c81d528ea-qdr-test-config\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.889340 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/44531e3b-2fc7-438c-b280-716c81d528ea-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.890342 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/44531e3b-2fc7-438c-b280-716c81d528ea-qdr-test-config\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.906197 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/44531e3b-2fc7-438c-b280-716c81d528ea-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:43 crc kubenswrapper[5109]: I0219 00:23:43.908750 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk48t\" (UniqueName: \"kubernetes.io/projected/44531e3b-2fc7-438c-b280-716c81d528ea-kube-api-access-lk48t\") pod \"qdr-test\" (UID: \"44531e3b-2fc7-438c-b280-716c81d528ea\") " pod="service-telemetry/qdr-test" Feb 19 00:23:44 crc kubenswrapper[5109]: I0219 00:23:44.008340 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 19 00:23:44 crc kubenswrapper[5109]: I0219 00:23:44.481625 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 19 00:23:44 crc kubenswrapper[5109]: I0219 00:23:44.919625 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"44531e3b-2fc7-438c-b280-716c81d528ea","Type":"ContainerStarted","Data":"826666f1eb71960a1343a79ed978f7a5a32e8ca107bccaec432a9b295e2d345e"} Feb 19 00:23:48 crc kubenswrapper[5109]: I0219 00:23:48.990842 5109 scope.go:117] "RemoveContainer" containerID="fe871c3cafadb807ecfe7f886c9841071539a74349742d50fc04e877ede454cb" Feb 19 00:23:48 crc kubenswrapper[5109]: I0219 00:23:48.991126 5109 scope.go:117] "RemoveContainer" containerID="08114f55fd7381b70d79b50762b4f3a2e99565b6f8b591f1505c91b79b7f760c" Feb 19 00:23:51 crc kubenswrapper[5109]: I0219 00:23:51.008695 5109 scope.go:117] "RemoveContainer" containerID="811b607c121a7b3b647310700d0ce5ec8e99e21038f2d5f6ec6ddd9f5e920f56" Feb 19 00:23:51 crc kubenswrapper[5109]: I0219 00:23:51.995047 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6" event={"ID":"09a4f0cf-0742-4cf8-9687-1718b399b321","Type":"ContainerStarted","Data":"fd382e630da05303f8e7de29dbc92e244c58942d5cbd5457f15b529378ab8d45"} Feb 19 00:23:51 crc kubenswrapper[5109]: I0219 00:23:51.999411 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-cp5j9" event={"ID":"52838cf3-d3af-4769-b402-60663fda6d46","Type":"ContainerStarted","Data":"f7194e524125aa57eeef3528f6f5427a8a0810aa7a5f90f3cf9fe0d43b9f2050"} Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.003926 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97" 
event={"ID":"c2d1ace0-e174-4538-8038-bef4c5ba338e","Type":"ContainerStarted","Data":"ebddfdad4982876617cc1abf7e64058fe2febf81609b44ff2d629d850c41fb28"} Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.006766 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"44531e3b-2fc7-438c-b280-716c81d528ea","Type":"ContainerStarted","Data":"60dee13a2e210b56f3574d1d01dd3610736dcfca8ef37a9548c1cd6a8eab37e7"} Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.050564 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.362890535 podStartE2EDuration="9.050541007s" podCreationTimestamp="2026-02-19 00:23:43 +0000 UTC" firstStartedPulling="2026-02-19 00:23:44.494209325 +0000 UTC m=+854.330449314" lastFinishedPulling="2026-02-19 00:23:51.181859787 +0000 UTC m=+861.018099786" observedRunningTime="2026-02-19 00:23:52.043909563 +0000 UTC m=+861.880149552" watchObservedRunningTime="2026-02-19 00:23:52.050541007 +0000 UTC m=+861.886780996" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.370870 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-k6cm2"] Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.377778 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-k6cm2"] Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.377915 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.380325 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.380557 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.381069 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.381500 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.381828 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.382056 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407246 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407313 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: 
\"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407347 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407378 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407438 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72b9g\" (UniqueName: \"kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407471 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.407509 5109 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.509288 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.509339 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.509383 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.509429 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-72b9g\" (UniqueName: \"kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 
00:23:52.509673 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.509709 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.510473 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.510501 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.510700 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc 
kubenswrapper[5109]: I0219 00:23:52.510794 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.510970 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.511588 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.512341 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.531714 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-72b9g\" (UniqueName: \"kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g\") pod \"stf-smoketest-smoke1-k6cm2\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc 
kubenswrapper[5109]: I0219 00:23:52.701381 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.775602 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.783165 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.783315 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.816645 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7242\" (UniqueName: \"kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242\") pod \"curl\" (UID: \"509eadfa-c006-4853-a789-bf048235440c\") " pod="service-telemetry/curl" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.918593 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7242\" (UniqueName: \"kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242\") pod \"curl\" (UID: \"509eadfa-c006-4853-a789-bf048235440c\") " pod="service-telemetry/curl" Feb 19 00:23:52 crc kubenswrapper[5109]: I0219 00:23:52.936356 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7242\" (UniqueName: \"kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242\") pod \"curl\" (UID: \"509eadfa-c006-4853-a789-bf048235440c\") " pod="service-telemetry/curl" Feb 19 00:23:53 crc kubenswrapper[5109]: I0219 00:23:53.100605 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 19 00:23:53 crc kubenswrapper[5109]: I0219 00:23:53.158132 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-k6cm2"] Feb 19 00:23:53 crc kubenswrapper[5109]: W0219 00:23:53.164882 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7a5f2b7_d13c_4391_a180_cac85795537d.slice/crio-a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65 WatchSource:0}: Error finding container a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65: Status 404 returned error can't find the container with id a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65 Feb 19 00:23:53 crc kubenswrapper[5109]: I0219 00:23:53.530132 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 19 00:23:54 crc kubenswrapper[5109]: I0219 00:23:54.021104 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerStarted","Data":"a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65"} Feb 19 00:23:54 crc kubenswrapper[5109]: I0219 00:23:54.022832 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"509eadfa-c006-4853-a789-bf048235440c","Type":"ContainerStarted","Data":"9f6c4b01ed75cd6f3079f3a98d0717b4f5edc18e7db0b67370123d3682f406fc"} Feb 19 00:23:55 crc kubenswrapper[5109]: I0219 00:23:55.031047 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"509eadfa-c006-4853-a789-bf048235440c","Type":"ContainerStarted","Data":"079a875c8379d85a237a3f34b78aa3caf93a49ffbbad147cf9d1963af59f9c69"} Feb 19 00:23:55 crc kubenswrapper[5109]: I0219 00:23:55.046804 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/curl" 
podStartSLOduration=1.7351948240000001 podStartE2EDuration="3.046609131s" podCreationTimestamp="2026-02-19 00:23:52 +0000 UTC" firstStartedPulling="2026-02-19 00:23:53.540962023 +0000 UTC m=+863.377202012" lastFinishedPulling="2026-02-19 00:23:54.85237634 +0000 UTC m=+864.688616319" observedRunningTime="2026-02-19 00:23:55.044889853 +0000 UTC m=+864.881129872" watchObservedRunningTime="2026-02-19 00:23:55.046609131 +0000 UTC m=+864.882849140" Feb 19 00:23:56 crc kubenswrapper[5109]: I0219 00:23:56.042551 5109 generic.go:358] "Generic (PLEG): container finished" podID="509eadfa-c006-4853-a789-bf048235440c" containerID="079a875c8379d85a237a3f34b78aa3caf93a49ffbbad147cf9d1963af59f9c69" exitCode=0 Feb 19 00:23:56 crc kubenswrapper[5109]: I0219 00:23:56.043056 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"509eadfa-c006-4853-a789-bf048235440c","Type":"ContainerDied","Data":"079a875c8379d85a237a3f34b78aa3caf93a49ffbbad147cf9d1963af59f9c69"} Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.124348 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.196589 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7242\" (UniqueName: \"kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242\") pod \"509eadfa-c006-4853-a789-bf048235440c\" (UID: \"509eadfa-c006-4853-a789-bf048235440c\") " Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.203920 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242" (OuterVolumeSpecName: "kube-api-access-t7242") pod "509eadfa-c006-4853-a789-bf048235440c" (UID: "509eadfa-c006-4853-a789-bf048235440c"). InnerVolumeSpecName "kube-api-access-t7242". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.293399 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_509eadfa-c006-4853-a789-bf048235440c/curl/0.log" Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.298748 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7242\" (UniqueName: \"kubernetes.io/projected/509eadfa-c006-4853-a789-bf048235440c-kube-api-access-t7242\") on node \"crc\" DevicePath \"\"" Feb 19 00:23:58 crc kubenswrapper[5109]: I0219 00:23:58.560129 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-689mq_b977cac1-63c2-4f60-b999-c3ca20fb5bc7/prometheus-webhook-snmp/0.log" Feb 19 00:23:59 crc kubenswrapper[5109]: I0219 00:23:59.068843 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 19 00:23:59 crc kubenswrapper[5109]: I0219 00:23:59.068863 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"509eadfa-c006-4853-a789-bf048235440c","Type":"ContainerDied","Data":"9f6c4b01ed75cd6f3079f3a98d0717b4f5edc18e7db0b67370123d3682f406fc"} Feb 19 00:23:59 crc kubenswrapper[5109]: I0219 00:23:59.068899 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f6c4b01ed75cd6f3079f3a98d0717b4f5edc18e7db0b67370123d3682f406fc" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.132351 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524344-d4wqv"] Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.133267 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="509eadfa-c006-4853-a789-bf048235440c" containerName="curl" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.133285 5109 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="509eadfa-c006-4853-a789-bf048235440c" containerName="curl" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.133474 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="509eadfa-c006-4853-a789-bf048235440c" containerName="curl" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.154688 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-d4wqv"] Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.154889 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.157614 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\"" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.158223 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.162250 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.222774 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94tpq\" (UniqueName: \"kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq\") pod \"auto-csr-approver-29524344-d4wqv\" (UID: \"5f698e68-6c95-41c2-a911-b81382b3b111\") " pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.324410 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-94tpq\" (UniqueName: \"kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq\") pod \"auto-csr-approver-29524344-d4wqv\" (UID: 
\"5f698e68-6c95-41c2-a911-b81382b3b111\") " pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.345592 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-94tpq\" (UniqueName: \"kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq\") pod \"auto-csr-approver-29524344-d4wqv\" (UID: \"5f698e68-6c95-41c2-a911-b81382b3b111\") " pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:00 crc kubenswrapper[5109]: I0219 00:24:00.636137 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:02 crc kubenswrapper[5109]: I0219 00:24:02.600834 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-d4wqv"] Feb 19 00:24:02 crc kubenswrapper[5109]: W0219 00:24:02.606192 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5f698e68_6c95_41c2_a911_b81382b3b111.slice/crio-915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc WatchSource:0}: Error finding container 915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc: Status 404 returned error can't find the container with id 915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc Feb 19 00:24:03 crc kubenswrapper[5109]: I0219 00:24:03.101458 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" event={"ID":"5f698e68-6c95-41c2-a911-b81382b3b111","Type":"ContainerStarted","Data":"915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc"} Feb 19 00:24:03 crc kubenswrapper[5109]: I0219 00:24:03.103814 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" 
event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerStarted","Data":"ad2a9b18dc5aafd3c1242a307b8f84cecb7660aadc9f95d7baba13eea2e7ec13"} Feb 19 00:24:04 crc kubenswrapper[5109]: I0219 00:24:04.110603 5109 generic.go:358] "Generic (PLEG): container finished" podID="5f698e68-6c95-41c2-a911-b81382b3b111" containerID="2ce547874796683ed01486e087e55964c7e268e5d7757598f769079ce90f1732" exitCode=0 Feb 19 00:24:04 crc kubenswrapper[5109]: I0219 00:24:04.110674 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" event={"ID":"5f698e68-6c95-41c2-a911-b81382b3b111","Type":"ContainerDied","Data":"2ce547874796683ed01486e087e55964c7e268e5d7757598f769079ce90f1732"} Feb 19 00:24:07 crc kubenswrapper[5109]: I0219 00:24:07.483508 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:07 crc kubenswrapper[5109]: I0219 00:24:07.540612 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94tpq\" (UniqueName: \"kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq\") pod \"5f698e68-6c95-41c2-a911-b81382b3b111\" (UID: \"5f698e68-6c95-41c2-a911-b81382b3b111\") " Feb 19 00:24:07 crc kubenswrapper[5109]: I0219 00:24:07.545124 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq" (OuterVolumeSpecName: "kube-api-access-94tpq") pod "5f698e68-6c95-41c2-a911-b81382b3b111" (UID: "5f698e68-6c95-41c2-a911-b81382b3b111"). InnerVolumeSpecName "kube-api-access-94tpq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:24:07 crc kubenswrapper[5109]: I0219 00:24:07.642133 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94tpq\" (UniqueName: \"kubernetes.io/projected/5f698e68-6c95-41c2-a911-b81382b3b111-kube-api-access-94tpq\") on node \"crc\" DevicePath \"\"" Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.143897 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" event={"ID":"5f698e68-6c95-41c2-a911-b81382b3b111","Type":"ContainerDied","Data":"915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc"} Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.144449 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="915397ce0d3b7904c9f934dcade20ff992f9b625f9595c60de2ca01af77f95fc" Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.144707 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524344-d4wqv" Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.148180 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerStarted","Data":"e8125fead6a3de35ba0c025b5a3e4437f85b5f5811e275b273ca7a3cd19b80f2"} Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.174880 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" podStartSLOduration=1.854878915 podStartE2EDuration="16.174855568s" podCreationTimestamp="2026-02-19 00:23:52 +0000 UTC" firstStartedPulling="2026-02-19 00:23:53.168280919 +0000 UTC m=+863.004520908" lastFinishedPulling="2026-02-19 00:24:07.488257552 +0000 UTC m=+877.324497561" observedRunningTime="2026-02-19 00:24:08.167419272 +0000 UTC m=+878.003659261" watchObservedRunningTime="2026-02-19 00:24:08.174855568 
+0000 UTC m=+878.011095567" Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.550419 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-l7qgq"] Feb 19 00:24:08 crc kubenswrapper[5109]: I0219 00:24:08.554772 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524338-l7qgq"] Feb 19 00:24:09 crc kubenswrapper[5109]: I0219 00:24:09.000686 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1043a162-8b5d-4bbb-a40a-0a0b1ee213d3" path="/var/lib/kubelet/pods/1043a162-8b5d-4bbb-a40a-0a0b1ee213d3/volumes" Feb 19 00:24:28 crc kubenswrapper[5109]: I0219 00:24:28.722685 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-689mq_b977cac1-63c2-4f60-b999-c3ca20fb5bc7/prometheus-webhook-snmp/0.log" Feb 19 00:24:31 crc kubenswrapper[5109]: I0219 00:24:31.365363 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 19 00:24:31 crc kubenswrapper[5109]: I0219 00:24:31.370117 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log" Feb 19 00:24:31 crc kubenswrapper[5109]: I0219 00:24:31.379288 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:24:31 crc kubenswrapper[5109]: I0219 00:24:31.379325 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 19 00:24:37 crc kubenswrapper[5109]: I0219 00:24:37.200685 5109 scope.go:117] "RemoveContainer" containerID="0f9aaf70b6930c00f373e57b1be813dee1fd510a4ef2c906ecd0965c2a58bbfe" Feb 19 
00:24:37 crc kubenswrapper[5109]: I0219 00:24:37.365123 5109 generic.go:358] "Generic (PLEG): container finished" podID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerID="ad2a9b18dc5aafd3c1242a307b8f84cecb7660aadc9f95d7baba13eea2e7ec13" exitCode=1 Feb 19 00:24:37 crc kubenswrapper[5109]: I0219 00:24:37.365188 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerDied","Data":"ad2a9b18dc5aafd3c1242a307b8f84cecb7660aadc9f95d7baba13eea2e7ec13"} Feb 19 00:24:37 crc kubenswrapper[5109]: I0219 00:24:37.365940 5109 scope.go:117] "RemoveContainer" containerID="ad2a9b18dc5aafd3c1242a307b8f84cecb7660aadc9f95d7baba13eea2e7ec13" Feb 19 00:24:39 crc kubenswrapper[5109]: I0219 00:24:39.387312 5109 generic.go:358] "Generic (PLEG): container finished" podID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerID="e8125fead6a3de35ba0c025b5a3e4437f85b5f5811e275b273ca7a3cd19b80f2" exitCode=0 Feb 19 00:24:39 crc kubenswrapper[5109]: I0219 00:24:39.387403 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerDied","Data":"e8125fead6a3de35ba0c025b5a3e4437f85b5f5811e275b273ca7a3cd19b80f2"} Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.747884 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.875745 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.875815 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.875847 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72b9g\" (UniqueName: \"kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.875922 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.875950 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.876000 5109 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.876014 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config\") pod \"b7a5f2b7-d13c-4391-a180-cac85795537d\" (UID: \"b7a5f2b7-d13c-4391-a180-cac85795537d\") " Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.882715 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g" (OuterVolumeSpecName: "kube-api-access-72b9g") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "kube-api-access-72b9g". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.895354 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.899407 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.901518 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.902515 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.907786 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.918689 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "b7a5f2b7-d13c-4391-a180-cac85795537d" (UID: "b7a5f2b7-d13c-4391-a180-cac85795537d"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977666 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-publisher\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977724 5109 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977736 5109 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-collectd-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977744 5109 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-sensubility-config\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977753 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977762 5109 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b7a5f2b7-d13c-4391-a180-cac85795537d-healthcheck-log\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:40 crc kubenswrapper[5109]: I0219 00:24:40.977773 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-72b9g\" (UniqueName: \"kubernetes.io/projected/b7a5f2b7-d13c-4391-a180-cac85795537d-kube-api-access-72b9g\") on node \"crc\" DevicePath \"\""
Feb 19 00:24:41 crc kubenswrapper[5109]: I0219 00:24:41.411655 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-k6cm2" event={"ID":"b7a5f2b7-d13c-4391-a180-cac85795537d","Type":"ContainerDied","Data":"a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65"}
Feb 19 00:24:41 crc kubenswrapper[5109]: I0219 00:24:41.411705 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a28d6bba2ce851784371660a69bb379b005f6a6b4d9a8d11eea59e50f3fdea65"
Feb 19 00:24:41 crc kubenswrapper[5109]: I0219 00:24:41.411722 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-k6cm2"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.038965 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f7tvw"]
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040231 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5f698e68-6c95-41c2-a911-b81382b3b111" containerName="oc"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040246 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f698e68-6c95-41c2-a911-b81382b3b111" containerName="oc"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040262 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-ceilometer"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040272 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-ceilometer"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040284 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-collectd"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040291 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-collectd"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040415 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="5f698e68-6c95-41c2-a911-b81382b3b111" containerName="oc"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040431 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-collectd"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.040440 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7a5f2b7-d13c-4391-a180-cac85795537d" containerName="smoketest-ceilometer"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.047141 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.049499 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.049909 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.050141 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.050216 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.053798 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f7tvw"]
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.053845 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.053844 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\""
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.110942 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111013 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111078 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111099 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111329 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111375 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.111428 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213126 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213221 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213324 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213371 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213512 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213565 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.213611 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.214441 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.214500 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.214561 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.215293 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.215390 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.215875 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.239806 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd\") pod \"stf-smoketest-smoke1-f7tvw\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") " pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.363999 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:24:49 crc kubenswrapper[5109]: I0219 00:24:49.806553 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-f7tvw"]
Feb 19 00:24:50 crc kubenswrapper[5109]: I0219 00:24:50.491559 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerStarted","Data":"d5388efd7f8566f7689d23f0745cc4b15c3fdd64de0e970dc84449a9fa097b96"}
Feb 19 00:24:50 crc kubenswrapper[5109]: I0219 00:24:50.491984 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerStarted","Data":"d5736cc98ca0a0743af5cc06ba68bb274aa47dd2ae743c4ca62e266c791453f9"}
Feb 19 00:24:50 crc kubenswrapper[5109]: I0219 00:24:50.492012 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerStarted","Data":"f54360cf2615c23dbb5eb1f97e9873c310bdaa2a7fe9a2db666ff7ec30ed080b"}
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.564310 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" podStartSLOduration=19.564292021 podStartE2EDuration="19.564292021s" podCreationTimestamp="2026-02-19 00:24:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-19 00:24:50.519590832 +0000 UTC m=+920.355830851" watchObservedRunningTime="2026-02-19 00:25:08.564292021 +0000 UTC m=+938.400532010"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.569899 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.575728 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.588058 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.635808 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.635940 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwk85\" (UniqueName: \"kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.636118 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.737985 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.738048 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fwk85\" (UniqueName: \"kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.738137 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.738520 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.738580 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.761688 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwk85\" (UniqueName: \"kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85\") pod \"community-operators-7pnk5\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") " pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:08 crc kubenswrapper[5109]: I0219 00:25:08.913642 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:09 crc kubenswrapper[5109]: I0219 00:25:09.386880 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:09 crc kubenswrapper[5109]: W0219 00:25:09.392013 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68a12bdc_4a71_4dbe_be9b_654f28bec15a.slice/crio-5613f8fc663a72dc39662b68180a0aac4af9790cbef3bbcdbefa9a606088bf04 WatchSource:0}: Error finding container 5613f8fc663a72dc39662b68180a0aac4af9790cbef3bbcdbefa9a606088bf04: Status 404 returned error can't find the container with id 5613f8fc663a72dc39662b68180a0aac4af9790cbef3bbcdbefa9a606088bf04
Feb 19 00:25:09 crc kubenswrapper[5109]: I0219 00:25:09.394716 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 19 00:25:09 crc kubenswrapper[5109]: I0219 00:25:09.664044 5109 generic.go:358] "Generic (PLEG): container finished" podID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerID="bea847cf8a71478e704b49eeacda0ac349de0c546ac278a8cb358dc1e5b4f6c3" exitCode=0
Feb 19 00:25:09 crc kubenswrapper[5109]: I0219 00:25:09.664104 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerDied","Data":"bea847cf8a71478e704b49eeacda0ac349de0c546ac278a8cb358dc1e5b4f6c3"}
Feb 19 00:25:09 crc kubenswrapper[5109]: I0219 00:25:09.664372 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerStarted","Data":"5613f8fc663a72dc39662b68180a0aac4af9790cbef3bbcdbefa9a606088bf04"}
Feb 19 00:25:11 crc kubenswrapper[5109]: I0219 00:25:11.681625 5109 generic.go:358] "Generic (PLEG): container finished" podID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerID="ba7ce54741dd60e40376a492812202ca4192bdc8ae6b8437b89f4b074ce5253a" exitCode=0
Feb 19 00:25:11 crc kubenswrapper[5109]: I0219 00:25:11.681819 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerDied","Data":"ba7ce54741dd60e40376a492812202ca4192bdc8ae6b8437b89f4b074ce5253a"}
Feb 19 00:25:12 crc kubenswrapper[5109]: I0219 00:25:12.694282 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerStarted","Data":"536f6c7e58b4c7dc27d7e443d15658582dcc6a57f7107e0c057c903357d41b94"}
Feb 19 00:25:12 crc kubenswrapper[5109]: I0219 00:25:12.717008 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7pnk5" podStartSLOduration=3.766504082 podStartE2EDuration="4.716981445s" podCreationTimestamp="2026-02-19 00:25:08 +0000 UTC" firstStartedPulling="2026-02-19 00:25:09.665240427 +0000 UTC m=+939.501480456" lastFinishedPulling="2026-02-19 00:25:10.61571782 +0000 UTC m=+940.451957819" observedRunningTime="2026-02-19 00:25:12.712590422 +0000 UTC m=+942.548830411" watchObservedRunningTime="2026-02-19 00:25:12.716981445 +0000 UTC m=+942.553221454"
Feb 19 00:25:18 crc kubenswrapper[5109]: I0219 00:25:18.914186 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:18 crc kubenswrapper[5109]: I0219 00:25:18.914620 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:19 crc kubenswrapper[5109]: I0219 00:25:19.005957 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:19 crc kubenswrapper[5109]: I0219 00:25:19.824315 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:19 crc kubenswrapper[5109]: I0219 00:25:19.906517 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:21 crc kubenswrapper[5109]: I0219 00:25:21.775018 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7pnk5" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="registry-server" containerID="cri-o://536f6c7e58b4c7dc27d7e443d15658582dcc6a57f7107e0c057c903357d41b94" gracePeriod=2
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.786091 5109 generic.go:358] "Generic (PLEG): container finished" podID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerID="536f6c7e58b4c7dc27d7e443d15658582dcc6a57f7107e0c057c903357d41b94" exitCode=0
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.786133 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerDied","Data":"536f6c7e58b4c7dc27d7e443d15658582dcc6a57f7107e0c057c903357d41b94"}
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.789473 5109 generic.go:358] "Generic (PLEG): container finished" podID="afa373db-1095-44d6-adae-43ff762418fb" containerID="d5388efd7f8566f7689d23f0745cc4b15c3fdd64de0e970dc84449a9fa097b96" exitCode=0
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.789510 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerDied","Data":"d5388efd7f8566f7689d23f0745cc4b15c3fdd64de0e970dc84449a9fa097b96"}
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.793772 5109 scope.go:117] "RemoveContainer" containerID="d5388efd7f8566f7689d23f0745cc4b15c3fdd64de0e970dc84449a9fa097b96"
Feb 19 00:25:22 crc kubenswrapper[5109]: I0219 00:25:22.925754 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.087579 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content\") pod \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") "
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.087710 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities\") pod \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") "
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.087769 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwk85\" (UniqueName: \"kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85\") pod \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\" (UID: \"68a12bdc-4a71-4dbe-be9b-654f28bec15a\") "
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.089000 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities" (OuterVolumeSpecName: "utilities") pod "68a12bdc-4a71-4dbe-be9b-654f28bec15a" (UID: "68a12bdc-4a71-4dbe-be9b-654f28bec15a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.099453 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85" (OuterVolumeSpecName: "kube-api-access-fwk85") pod "68a12bdc-4a71-4dbe-be9b-654f28bec15a" (UID: "68a12bdc-4a71-4dbe-be9b-654f28bec15a"). InnerVolumeSpecName "kube-api-access-fwk85". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.168832 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "68a12bdc-4a71-4dbe-be9b-654f28bec15a" (UID: "68a12bdc-4a71-4dbe-be9b-654f28bec15a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.189240 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.189280 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/68a12bdc-4a71-4dbe-be9b-654f28bec15a-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.189295 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fwk85\" (UniqueName: \"kubernetes.io/projected/68a12bdc-4a71-4dbe-be9b-654f28bec15a-kube-api-access-fwk85\") on node \"crc\" DevicePath \"\""
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.810777 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7pnk5" event={"ID":"68a12bdc-4a71-4dbe-be9b-654f28bec15a","Type":"ContainerDied","Data":"5613f8fc663a72dc39662b68180a0aac4af9790cbef3bbcdbefa9a606088bf04"}
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.810851 5109 scope.go:117] "RemoveContainer" containerID="536f6c7e58b4c7dc27d7e443d15658582dcc6a57f7107e0c057c903357d41b94"
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.810905 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7pnk5"
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.815022 5109 generic.go:358] "Generic (PLEG): container finished" podID="afa373db-1095-44d6-adae-43ff762418fb" containerID="d5736cc98ca0a0743af5cc06ba68bb274aa47dd2ae743c4ca62e266c791453f9" exitCode=0
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.815068 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerDied","Data":"d5736cc98ca0a0743af5cc06ba68bb274aa47dd2ae743c4ca62e266c791453f9"}
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.842439 5109 scope.go:117] "RemoveContainer" containerID="ba7ce54741dd60e40376a492812202ca4192bdc8ae6b8437b89f4b074ce5253a"
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.886440 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.899953 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7pnk5"]
Feb 19 00:25:23 crc kubenswrapper[5109]: I0219 00:25:23.904868 5109 scope.go:117] "RemoveContainer" containerID="bea847cf8a71478e704b49eeacda0ac349de0c546ac278a8cb358dc1e5b4f6c3"
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.006217 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" path="/var/lib/kubelet/pods/68a12bdc-4a71-4dbe-be9b-654f28bec15a/volumes"
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.252392 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f7tvw"
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330539 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330595 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330720 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330791 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330876 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330934 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.330967 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script\") pod \"afa373db-1095-44d6-adae-43ff762418fb\" (UID: \"afa373db-1095-44d6-adae-43ff762418fb\") "
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.337667 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd" (OuterVolumeSpecName: "kube-api-access-xn9xd") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "kube-api-access-xn9xd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.354072 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.354551 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.357253 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.358551 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.361752 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "collectd-config".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.363088 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "afa373db-1095-44d6-adae-43ff762418fb" (UID: "afa373db-1095-44d6-adae-43ff762418fb"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432804 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xn9xd\" (UniqueName: \"kubernetes.io/projected/afa373db-1095-44d6-adae-43ff762418fb-kube-api-access-xn9xd\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432838 5109 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432847 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432856 5109 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432866 5109 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432874 5109 
reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.432882 5109 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/afa373db-1095-44d6-adae-43ff762418fb-sensubility-config\") on node \"crc\" DevicePath \"\"" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.849889 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" event={"ID":"afa373db-1095-44d6-adae-43ff762418fb","Type":"ContainerDied","Data":"f54360cf2615c23dbb5eb1f97e9873c310bdaa2a7fe9a2db666ff7ec30ed080b"} Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.849936 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-f7tvw" Feb 19 00:25:25 crc kubenswrapper[5109]: I0219 00:25:25.849957 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f54360cf2615c23dbb5eb1f97e9873c310bdaa2a7fe9a2db666ff7ec30ed080b" Feb 19 00:25:27 crc kubenswrapper[5109]: I0219 00:25:27.309054 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-f7tvw_afa373db-1095-44d6-adae-43ff762418fb/smoketest-collectd/0.log" Feb 19 00:25:27 crc kubenswrapper[5109]: I0219 00:25:27.596997 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-f7tvw_afa373db-1095-44d6-adae-43ff762418fb/smoketest-ceilometer/0.log" Feb 19 00:25:27 crc kubenswrapper[5109]: I0219 00:25:27.953626 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-z8dc7_78aa9fcf-a6f5-4fee-af77-f741bd0f1ee0/default-interconnect/0.log" Feb 19 00:25:28 crc kubenswrapper[5109]: I0219 00:25:28.259763 5109 
log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-cp5j9_52838cf3-d3af-4769-b402-60663fda6d46/bridge/2.log" Feb 19 00:25:28 crc kubenswrapper[5109]: I0219 00:25:28.550293 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-cp5j9_52838cf3-d3af-4769-b402-60663fda6d46/sg-core/0.log" Feb 19 00:25:28 crc kubenswrapper[5109]: I0219 00:25:28.877006 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6bbc884464-llp85_53dee7d4-1233-4e93-b0e5-89b35ef19b4a/bridge/1.log" Feb 19 00:25:29 crc kubenswrapper[5109]: I0219 00:25:29.234805 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-6bbc884464-llp85_53dee7d4-1233-4e93-b0e5-89b35ef19b4a/sg-core/0.log" Feb 19 00:25:29 crc kubenswrapper[5109]: I0219 00:25:29.570554 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97_c2d1ace0-e174-4538-8038-bef4c5ba338e/bridge/2.log" Feb 19 00:25:29 crc kubenswrapper[5109]: I0219 00:25:29.849493 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-jcb97_c2d1ace0-e174-4538-8038-bef4c5ba338e/sg-core/0.log" Feb 19 00:25:30 crc kubenswrapper[5109]: I0219 00:25:30.163968 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2_b7a60315-59d0-4fd9-8a9e-4ecb38a8c926/bridge/1.log" Feb 19 00:25:30 crc kubenswrapper[5109]: I0219 00:25:30.433954 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-7b44777b78-nk8f2_b7a60315-59d0-4fd9-8a9e-4ecb38a8c926/sg-core/0.log" Feb 19 00:25:30 crc kubenswrapper[5109]: I0219 
00:25:30.708175 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6_09a4f0cf-0742-4cf8-9687-1718b399b321/bridge/2.log" Feb 19 00:25:30 crc kubenswrapper[5109]: I0219 00:25:30.999372 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-vpzl6_09a4f0cf-0742-4cf8-9687-1718b399b321/sg-core/0.log" Feb 19 00:25:34 crc kubenswrapper[5109]: I0219 00:25:34.050379 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-64rz7_16f870f2-494d-439e-a72c-73446c158d32/operator/0.log" Feb 19 00:25:34 crc kubenswrapper[5109]: I0219 00:25:34.314241 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_01c6aa79-2623-4589-89eb-4e7170e2edd4/prometheus/0.log" Feb 19 00:25:34 crc kubenswrapper[5109]: I0219 00:25:34.660301 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_c9c257ed-3ada-4f89-acc4-d6ef40715e7e/elasticsearch/0.log" Feb 19 00:25:34 crc kubenswrapper[5109]: I0219 00:25:34.964146 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-689mq_b977cac1-63c2-4f60-b999-c3ca20fb5bc7/prometheus-webhook-snmp/0.log" Feb 19 00:25:35 crc kubenswrapper[5109]: I0219 00:25:35.281858 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_62739e79-bc0a-4ec9-a8fb-a667a70621e5/alertmanager/0.log" Feb 19 00:25:48 crc kubenswrapper[5109]: I0219 00:25:48.111590 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-cghjb_7836af4d-7c84-45ae-af6c-cd9f6edcc7fa/operator/0.log" Feb 19 00:25:48 crc kubenswrapper[5109]: I0219 00:25:48.290006 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:25:48 crc kubenswrapper[5109]: I0219 00:25:48.290102 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:25:51 crc kubenswrapper[5109]: I0219 00:25:51.435028 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-64rz7_16f870f2-494d-439e-a72c-73446c158d32/operator/0.log" Feb 19 00:25:51 crc kubenswrapper[5109]: I0219 00:25:51.712312 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_44531e3b-2fc7-438c-b280-716c81d528ea/qdr/0.log" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.139771 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524346-n8sqw"] Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.141759 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="registry-server" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.141843 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="registry-server" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.141905 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-collectd" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.141959 5109 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-collectd" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142017 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="extract-utilities" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142070 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="extract-utilities" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142157 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-ceilometer" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142214 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-ceilometer" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142275 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="extract-content" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142327 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="extract-content" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142476 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-ceilometer" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142539 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="68a12bdc-4a71-4dbe-be9b-654f28bec15a" containerName="registry-server" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.142598 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="afa373db-1095-44d6-adae-43ff762418fb" containerName="smoketest-collectd" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 
00:26:00.147944 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.151513 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.151754 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.153394 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-n8sqw"] Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.156522 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\"" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.249897 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnk7\" (UniqueName: \"kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7\") pod \"auto-csr-approver-29524346-n8sqw\" (UID: \"d4761ed2-dd5e-4f35-b221-ad9799b89004\") " pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.351806 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnk7\" (UniqueName: \"kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7\") pod \"auto-csr-approver-29524346-n8sqw\" (UID: \"d4761ed2-dd5e-4f35-b221-ad9799b89004\") " pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.389164 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnk7\" (UniqueName: 
\"kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7\") pod \"auto-csr-approver-29524346-n8sqw\" (UID: \"d4761ed2-dd5e-4f35-b221-ad9799b89004\") " pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.482212 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:00 crc kubenswrapper[5109]: I0219 00:26:00.836589 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-n8sqw"] Feb 19 00:26:01 crc kubenswrapper[5109]: I0219 00:26:01.167283 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" event={"ID":"d4761ed2-dd5e-4f35-b221-ad9799b89004","Type":"ContainerStarted","Data":"c828f6264150cbf6d2c13bb1341f026baad6c51aebf7f7d8c67ed4a9e538bba4"} Feb 19 00:26:02 crc kubenswrapper[5109]: I0219 00:26:02.181533 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" event={"ID":"d4761ed2-dd5e-4f35-b221-ad9799b89004","Type":"ContainerStarted","Data":"dbde8d602df55c1ebb2f3526c49fc13618590d2a9007163d3fca71014455613e"} Feb 19 00:26:03 crc kubenswrapper[5109]: I0219 00:26:03.193784 5109 generic.go:358] "Generic (PLEG): container finished" podID="d4761ed2-dd5e-4f35-b221-ad9799b89004" containerID="dbde8d602df55c1ebb2f3526c49fc13618590d2a9007163d3fca71014455613e" exitCode=0 Feb 19 00:26:03 crc kubenswrapper[5109]: I0219 00:26:03.193886 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" event={"ID":"d4761ed2-dd5e-4f35-b221-ad9799b89004","Type":"ContainerDied","Data":"dbde8d602df55c1ebb2f3526c49fc13618590d2a9007163d3fca71014455613e"} Feb 19 00:26:04 crc kubenswrapper[5109]: I0219 00:26:04.568732 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:04 crc kubenswrapper[5109]: I0219 00:26:04.622371 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjnk7\" (UniqueName: \"kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7\") pod \"d4761ed2-dd5e-4f35-b221-ad9799b89004\" (UID: \"d4761ed2-dd5e-4f35-b221-ad9799b89004\") " Feb 19 00:26:04 crc kubenswrapper[5109]: I0219 00:26:04.631877 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7" (OuterVolumeSpecName: "kube-api-access-mjnk7") pod "d4761ed2-dd5e-4f35-b221-ad9799b89004" (UID: "d4761ed2-dd5e-4f35-b221-ad9799b89004"). InnerVolumeSpecName "kube-api-access-mjnk7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:26:04 crc kubenswrapper[5109]: I0219 00:26:04.724909 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjnk7\" (UniqueName: \"kubernetes.io/projected/d4761ed2-dd5e-4f35-b221-ad9799b89004-kube-api-access-mjnk7\") on node \"crc\" DevicePath \"\"" Feb 19 00:26:05 crc kubenswrapper[5109]: I0219 00:26:05.216095 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" Feb 19 00:26:05 crc kubenswrapper[5109]: I0219 00:26:05.216104 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524346-n8sqw" event={"ID":"d4761ed2-dd5e-4f35-b221-ad9799b89004","Type":"ContainerDied","Data":"c828f6264150cbf6d2c13bb1341f026baad6c51aebf7f7d8c67ed4a9e538bba4"} Feb 19 00:26:05 crc kubenswrapper[5109]: I0219 00:26:05.216175 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c828f6264150cbf6d2c13bb1341f026baad6c51aebf7f7d8c67ed4a9e538bba4" Feb 19 00:26:05 crc kubenswrapper[5109]: I0219 00:26:05.292604 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-dwzkg"] Feb 19 00:26:05 crc kubenswrapper[5109]: I0219 00:26:05.304624 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524340-dwzkg"] Feb 19 00:26:07 crc kubenswrapper[5109]: I0219 00:26:07.010851 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45cbaf31-5202-4d06-8328-9699984a859b" path="/var/lib/kubelet/pods/45cbaf31-5202-4d06-8328-9699984a859b/volumes" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.022614 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-5ktgb/must-gather-97d9b"] Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.023807 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d4761ed2-dd5e-4f35-b221-ad9799b89004" containerName="oc" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.023824 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4761ed2-dd5e-4f35-b221-ad9799b89004" containerName="oc" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.023952 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="d4761ed2-dd5e-4f35-b221-ad9799b89004" containerName="oc" Feb 19 00:26:18 crc 
kubenswrapper[5109]: I0219 00:26:18.100386 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.100646 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5ktgb/must-gather-97d9b"] Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.102522 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-5ktgb\"/\"openshift-service-ca.crt\"" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.106353 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-5ktgb\"/\"kube-root-ca.crt\"" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.165317 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lfnv\" (UniqueName: \"kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.165510 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.267484 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2lfnv\" (UniqueName: \"kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc 
kubenswrapper[5109]: I0219 00:26:18.267555 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.267936 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.290458 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.290558 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.299557 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lfnv\" (UniqueName: \"kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv\") pod \"must-gather-97d9b\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") " pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.430257 5109 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-5ktgb/must-gather-97d9b" Feb 19 00:26:18 crc kubenswrapper[5109]: I0219 00:26:18.656114 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-5ktgb/must-gather-97d9b"] Feb 19 00:26:19 crc kubenswrapper[5109]: I0219 00:26:19.354210 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5ktgb/must-gather-97d9b" event={"ID":"001e9e13-338a-4a30-9586-ba0071f745fd","Type":"ContainerStarted","Data":"51172369979db41b8ff93135a442ab8a52b67db2782fcaa6ddde79df189efa79"} Feb 19 00:26:24 crc kubenswrapper[5109]: I0219 00:26:24.397160 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5ktgb/must-gather-97d9b" event={"ID":"001e9e13-338a-4a30-9586-ba0071f745fd","Type":"ContainerStarted","Data":"872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966"} Feb 19 00:26:24 crc kubenswrapper[5109]: I0219 00:26:24.397685 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5ktgb/must-gather-97d9b" event={"ID":"001e9e13-338a-4a30-9586-ba0071f745fd","Type":"ContainerStarted","Data":"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"} Feb 19 00:26:24 crc kubenswrapper[5109]: I0219 00:26:24.414196 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-5ktgb/must-gather-97d9b" podStartSLOduration=1.436329047 podStartE2EDuration="6.414178782s" podCreationTimestamp="2026-02-19 00:26:18 +0000 UTC" firstStartedPulling="2026-02-19 00:26:18.66658483 +0000 UTC m=+1008.502824819" lastFinishedPulling="2026-02-19 00:26:23.644434565 +0000 UTC m=+1013.480674554" observedRunningTime="2026-02-19 00:26:24.413838582 +0000 UTC m=+1014.250078571" watchObservedRunningTime="2026-02-19 00:26:24.414178782 +0000 UTC m=+1014.250418771" Feb 19 00:26:37 crc kubenswrapper[5109]: I0219 00:26:37.315507 5109 scope.go:117] "RemoveContainer" 
containerID="32272c92ef5aa25088e59b1a36902d221b6586475e995d09e76ff6b37c455b74" Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.290152 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.291892 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.291985 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.293312 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.293424 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668" gracePeriod=600 Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.587815 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668" exitCode=0 Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.587910 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668"} Feb 19 00:26:48 crc kubenswrapper[5109]: I0219 00:26:48.588291 5109 scope.go:117] "RemoveContainer" containerID="1866f95804c252a234d5c7df5c1b71f3628f2d818e37a0353f0891500a2c933e" Feb 19 00:26:49 crc kubenswrapper[5109]: I0219 00:26:49.601091 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"02ce947e6ce5cf6117579557f049809d128808573cc503d03d9df931d899d624"} Feb 19 00:27:09 crc kubenswrapper[5109]: I0219 00:27:09.374595 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-qpwhk_dbf7d8d7-ef76-4af8-bc7e-91149dd703cf/control-plane-machine-set-operator/0.log" Feb 19 00:27:09 crc kubenswrapper[5109]: I0219 00:27:09.525100 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-vqhpb_070d6fda-192f-47cb-b873-192e072ff078/machine-api-operator/0.log" Feb 19 00:27:09 crc kubenswrapper[5109]: I0219 00:27:09.551386 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-vqhpb_070d6fda-192f-47cb-b873-192e072ff078/kube-rbac-proxy/0.log" Feb 19 00:27:22 crc kubenswrapper[5109]: I0219 00:27:22.034267 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-759f64656b-hc5g9_7be08e5e-17a1-4333-b9ae-89730a5b2da3/cert-manager-controller/0.log" Feb 19 00:27:22 crc kubenswrapper[5109]: I0219 00:27:22.111979 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-kzv2n_66f4d41b-0b12-427b-8882-f81b5d18b662/cert-manager-cainjector/0.log" Feb 19 00:27:22 crc kubenswrapper[5109]: I0219 00:27:22.213751 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-mgwcw_f419712c-11bd-425d-bcb7-e35869b34d49/cert-manager-webhook/0.log" Feb 19 00:27:36 crc kubenswrapper[5109]: I0219 00:27:36.157754 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7dwk9_a91dafae-307e-4ee3-965f-1534328cf242/prometheus-operator/0.log" Feb 19 00:27:36 crc kubenswrapper[5109]: I0219 00:27:36.314075 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8_30150c45-319a-48be-a756-530e75c42b2d/prometheus-operator-admission-webhook/0.log" Feb 19 00:27:36 crc kubenswrapper[5109]: I0219 00:27:36.372620 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c_b479eb3f-2359-4159-ad91-4f958b238af7/prometheus-operator-admission-webhook/0.log" Feb 19 00:27:36 crc kubenswrapper[5109]: I0219 00:27:36.486729 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mgfrq_a659594c-39ca-4fe7-b61b-bb074e4abc6d/operator/0.log" Feb 19 00:27:36 crc kubenswrapper[5109]: I0219 00:27:36.539704 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-kqlcr_b5bd03c0-434c-4adf-af86-1b5245b0a01e/perses-operator/0.log" Feb 19 00:27:51 crc kubenswrapper[5109]: I0219 00:27:51.712475 5109 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/util/0.log" Feb 19 00:27:51 crc kubenswrapper[5109]: I0219 00:27:51.843319 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/pull/0.log" Feb 19 00:27:51 crc kubenswrapper[5109]: I0219 00:27:51.847365 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/util/0.log" Feb 19 00:27:51 crc kubenswrapper[5109]: I0219 00:27:51.875759 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.062899 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/extract/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.065897 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/util/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.072197 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1s7dth_47290e31-3e82-43f9-8568-c2a1d602f78c/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.222798 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/util/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.392263 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/util/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.392996 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.432809 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.560795 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/extract/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.567067 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/util/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.608739 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8ftlq6q_19f51d62-ca7e-40d4-9aa3-1a53dc412fea/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.735815 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/util/0.log" Feb 19 
00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.926448 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/util/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.940205 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/pull/0.log" Feb 19 00:27:52 crc kubenswrapper[5109]: I0219 00:27:52.972045 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/pull/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.062161 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/util/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.119511 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/pull/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.129869 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qtwkv_b4d279e6-ab61-4657-a567-b007a7d707f9/extract/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.227603 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/util/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.352211 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/util/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.362363 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/pull/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.411367 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/pull/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.536356 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/pull/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.544516 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/util/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.553780 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f089nnj6_74cea250-8141-48fe-91eb-54068d760685/extract/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.717480 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-utilities/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.857264 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-content/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 
00:27:53.868109 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-utilities/0.log" Feb 19 00:27:53 crc kubenswrapper[5109]: I0219 00:27:53.875992 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.029198 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.061258 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.138835 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-mpr9j_2fcc53fd-7dcd-428b-9e6e-73a42e3c37bf/registry-server/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.216750 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.373472 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.377595 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.384557 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.538025 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.539356 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.583119 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-g5j87_d2efb82a-1039-47d1-9e51-102e80733bac/marketplace-operator/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.678730 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-xl49c_8e4f1385-5a2a-4098-b0c3-862f0656d43a/registry-server/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.783326 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.902626 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-utilities/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.914669 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-content/0.log" Feb 19 00:27:54 crc kubenswrapper[5109]: I0219 00:27:54.915151 5109 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-content/0.log" Feb 19 00:27:55 crc kubenswrapper[5109]: I0219 00:27:55.093032 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-utilities/0.log" Feb 19 00:27:55 crc kubenswrapper[5109]: I0219 00:27:55.133132 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/extract-content/0.log" Feb 19 00:27:55 crc kubenswrapper[5109]: I0219 00:27:55.258846 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5jr5v_7015d02a-6aa4-4209-b318-dfc88ebe6d01/registry-server/0.log" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.146940 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524348-8nc7z"] Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.164352 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-8nc7z"] Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.164521 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.166763 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\"" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.166795 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.167024 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.294974 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpj8h\" (UniqueName: \"kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h\") pod \"auto-csr-approver-29524348-8nc7z\" (UID: \"bf375d92-9566-4412-b54c-70a567d7ac26\") " pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.396615 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpj8h\" (UniqueName: \"kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h\") pod \"auto-csr-approver-29524348-8nc7z\" (UID: \"bf375d92-9566-4412-b54c-70a567d7ac26\") " pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.416304 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpj8h\" (UniqueName: \"kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h\") pod \"auto-csr-approver-29524348-8nc7z\" (UID: \"bf375d92-9566-4412-b54c-70a567d7ac26\") " pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.488911 5109 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:00 crc kubenswrapper[5109]: I0219 00:28:00.949409 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524348-8nc7z"] Feb 19 00:28:00 crc kubenswrapper[5109]: W0219 00:28:00.953189 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf375d92_9566_4412_b54c_70a567d7ac26.slice/crio-44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f WatchSource:0}: Error finding container 44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f: Status 404 returned error can't find the container with id 44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f Feb 19 00:28:01 crc kubenswrapper[5109]: I0219 00:28:01.177179 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" event={"ID":"bf375d92-9566-4412-b54c-70a567d7ac26","Type":"ContainerStarted","Data":"44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f"} Feb 19 00:28:03 crc kubenswrapper[5109]: I0219 00:28:03.197673 5109 generic.go:358] "Generic (PLEG): container finished" podID="bf375d92-9566-4412-b54c-70a567d7ac26" containerID="a7aaaefccdafe2dd9ce4801877b6a67971f669ddf52ab3fa09791eb5f94e10b1" exitCode=0 Feb 19 00:28:03 crc kubenswrapper[5109]: I0219 00:28:03.197880 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" event={"ID":"bf375d92-9566-4412-b54c-70a567d7ac26","Type":"ContainerDied","Data":"a7aaaefccdafe2dd9ce4801877b6a67971f669ddf52ab3fa09791eb5f94e10b1"} Feb 19 00:28:03 crc kubenswrapper[5109]: I0219 00:28:03.938506 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"] Feb 19 00:28:03 crc kubenswrapper[5109]: I0219 00:28:03.949234 5109 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:03 crc kubenswrapper[5109]: I0219 00:28:03.954867 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"] Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.051586 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.051764 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzkt6\" (UniqueName: \"kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.052143 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.153161 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.153229 5109 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.153255 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wzkt6\" (UniqueName: \"kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.153782 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.153920 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.184045 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzkt6\" (UniqueName: \"kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6\") pod \"certified-operators-b6pb8\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") " pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.279677 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b6pb8" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.543113 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.568073 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpj8h\" (UniqueName: \"kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h\") pod \"bf375d92-9566-4412-b54c-70a567d7ac26\" (UID: \"bf375d92-9566-4412-b54c-70a567d7ac26\") " Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.580812 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h" (OuterVolumeSpecName: "kube-api-access-kpj8h") pod "bf375d92-9566-4412-b54c-70a567d7ac26" (UID: "bf375d92-9566-4412-b54c-70a567d7ac26"). InnerVolumeSpecName "kube-api-access-kpj8h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.669376 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kpj8h\" (UniqueName: \"kubernetes.io/projected/bf375d92-9566-4412-b54c-70a567d7ac26-kube-api-access-kpj8h\") on node \"crc\" DevicePath \"\"" Feb 19 00:28:04 crc kubenswrapper[5109]: I0219 00:28:04.809671 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"] Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.214974 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.214981 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524348-8nc7z" event={"ID":"bf375d92-9566-4412-b54c-70a567d7ac26","Type":"ContainerDied","Data":"44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f"} Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.215377 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ca9131d83de1451a09c41b271aa605662a43e0e69ebd4dad889009b5bce98f" Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.217207 5109 generic.go:358] "Generic (PLEG): container finished" podID="3c95f41d-737d-4324-a6af-96e23a766009" containerID="f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5" exitCode=0 Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.217317 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerDied","Data":"f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5"} Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.217365 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerStarted","Data":"e3865f69be88e5a0a5c5732da8e2b87be55b0e1d30768ed2dd8778cbebd7cb3b"} Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.607898 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-j7jg7"] Feb 19 00:28:05 crc kubenswrapper[5109]: I0219 00:28:05.615839 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524342-j7jg7"] Feb 19 00:28:06 crc kubenswrapper[5109]: I0219 00:28:06.226928 5109 generic.go:358] "Generic (PLEG): container finished" 
podID="3c95f41d-737d-4324-a6af-96e23a766009" containerID="98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6" exitCode=0 Feb 19 00:28:06 crc kubenswrapper[5109]: I0219 00:28:06.227014 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerDied","Data":"98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6"} Feb 19 00:28:07 crc kubenswrapper[5109]: I0219 00:28:07.001867 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14a5ab9a-a49c-43fd-855c-a409b8c60e2c" path="/var/lib/kubelet/pods/14a5ab9a-a49c-43fd-855c-a409b8c60e2c/volumes" Feb 19 00:28:07 crc kubenswrapper[5109]: I0219 00:28:07.236318 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerStarted","Data":"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"} Feb 19 00:28:07 crc kubenswrapper[5109]: I0219 00:28:07.262516 5109 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b6pb8" podStartSLOduration=3.635028819 podStartE2EDuration="4.262496048s" podCreationTimestamp="2026-02-19 00:28:03 +0000 UTC" firstStartedPulling="2026-02-19 00:28:05.218089612 +0000 UTC m=+1115.054329601" lastFinishedPulling="2026-02-19 00:28:05.845556841 +0000 UTC m=+1115.681796830" observedRunningTime="2026-02-19 00:28:07.257886649 +0000 UTC m=+1117.094126688" watchObservedRunningTime="2026-02-19 00:28:07.262496048 +0000 UTC m=+1117.098736037" Feb 19 00:28:08 crc kubenswrapper[5109]: I0219 00:28:08.437693 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cff89555b-4q8m8_30150c45-319a-48be-a756-530e75c42b2d/prometheus-operator-admission-webhook/0.log" Feb 19 00:28:08 crc kubenswrapper[5109]: I0219 
00:28:08.469090 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5cff89555b-p2r7c_b479eb3f-2359-4159-ad91-4f958b238af7/prometheus-operator-admission-webhook/0.log"
Feb 19 00:28:08 crc kubenswrapper[5109]: I0219 00:28:08.497928 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-7dwk9_a91dafae-307e-4ee3-965f-1534328cf242/prometheus-operator/0.log"
Feb 19 00:28:08 crc kubenswrapper[5109]: I0219 00:28:08.587498 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-kqlcr_b5bd03c0-434c-4adf-af86-1b5245b0a01e/perses-operator/0.log"
Feb 19 00:28:08 crc kubenswrapper[5109]: I0219 00:28:08.589710 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-mgfrq_a659594c-39ca-4fe7-b61b-bb074e4abc6d/operator/0.log"
Feb 19 00:28:14 crc kubenswrapper[5109]: I0219 00:28:14.280614 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:14 crc kubenswrapper[5109]: I0219 00:28:14.281388 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:14 crc kubenswrapper[5109]: I0219 00:28:14.348351 5109 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:14 crc kubenswrapper[5109]: I0219 00:28:14.424815 5109 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:15 crc kubenswrapper[5109]: I0219 00:28:15.526136 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"]
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.320146 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b6pb8" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="registry-server" containerID="cri-o://4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846" gracePeriod=2
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.828811 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.979008 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content\") pod \"3c95f41d-737d-4324-a6af-96e23a766009\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") "
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.979168 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities\") pod \"3c95f41d-737d-4324-a6af-96e23a766009\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") "
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.979220 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzkt6\" (UniqueName: \"kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6\") pod \"3c95f41d-737d-4324-a6af-96e23a766009\" (UID: \"3c95f41d-737d-4324-a6af-96e23a766009\") "
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.981120 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities" (OuterVolumeSpecName: "utilities") pod "3c95f41d-737d-4324-a6af-96e23a766009" (UID: "3c95f41d-737d-4324-a6af-96e23a766009"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:28:16 crc kubenswrapper[5109]: I0219 00:28:16.986931 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6" (OuterVolumeSpecName: "kube-api-access-wzkt6") pod "3c95f41d-737d-4324-a6af-96e23a766009" (UID: "3c95f41d-737d-4324-a6af-96e23a766009"). InnerVolumeSpecName "kube-api-access-wzkt6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.021978 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c95f41d-737d-4324-a6af-96e23a766009" (UID: "3c95f41d-737d-4324-a6af-96e23a766009"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.081177 5109 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-utilities\") on node \"crc\" DevicePath \"\""
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.081227 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wzkt6\" (UniqueName: \"kubernetes.io/projected/3c95f41d-737d-4324-a6af-96e23a766009-kube-api-access-wzkt6\") on node \"crc\" DevicePath \"\""
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.081240 5109 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c95f41d-737d-4324-a6af-96e23a766009-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.337004 5109 generic.go:358] "Generic (PLEG): container finished" podID="3c95f41d-737d-4324-a6af-96e23a766009" containerID="4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846" exitCode=0
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.337514 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerDied","Data":"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"}
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.337561 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b6pb8" event={"ID":"3c95f41d-737d-4324-a6af-96e23a766009","Type":"ContainerDied","Data":"e3865f69be88e5a0a5c5732da8e2b87be55b0e1d30768ed2dd8778cbebd7cb3b"}
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.337593 5109 scope.go:117] "RemoveContainer" containerID="4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.337946 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b6pb8"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.400905 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"]
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.414009 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b6pb8"]
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.415198 5109 scope.go:117] "RemoveContainer" containerID="98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.439109 5109 scope.go:117] "RemoveContainer" containerID="f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.476326 5109 scope.go:117] "RemoveContainer" containerID="4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"
Feb 19 00:28:17 crc kubenswrapper[5109]: E0219 00:28:17.476809 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846\": container with ID starting with 4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846 not found: ID does not exist" containerID="4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.476931 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846"} err="failed to get container status \"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846\": rpc error: code = NotFound desc = could not find container \"4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846\": container with ID starting with 4bac23a21fbf65fb6b4f2f312e9f780f9ff9a1b104aa556c799d552cd54e0846 not found: ID does not exist"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.476987 5109 scope.go:117] "RemoveContainer" containerID="98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6"
Feb 19 00:28:17 crc kubenswrapper[5109]: E0219 00:28:17.477467 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6\": container with ID starting with 98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6 not found: ID does not exist" containerID="98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.477498 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6"} err="failed to get container status \"98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6\": rpc error: code = NotFound desc = could not find container \"98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6\": container with ID starting with 98ab387054f1bd467d08401a92eb12d64469c10e3cabb374e8cc980a42f099d6 not found: ID does not exist"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.477520 5109 scope.go:117] "RemoveContainer" containerID="f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5"
Feb 19 00:28:17 crc kubenswrapper[5109]: E0219 00:28:17.477861 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5\": container with ID starting with f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5 not found: ID does not exist" containerID="f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5"
Feb 19 00:28:17 crc kubenswrapper[5109]: I0219 00:28:17.477926 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5"} err="failed to get container status \"f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5\": rpc error: code = NotFound desc = could not find container \"f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5\": container with ID starting with f4d4a59091d9b4423c04c17ffcf02e6a302f5141637a76472ea4ecfd88be68d5 not found: ID does not exist"
Feb 19 00:28:18 crc kubenswrapper[5109]: I0219 00:28:18.999667 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c95f41d-737d-4324-a6af-96e23a766009" path="/var/lib/kubelet/pods/3c95f41d-737d-4324-a6af-96e23a766009/volumes"
Feb 19 00:28:37 crc kubenswrapper[5109]: I0219 00:28:37.430326 5109 scope.go:117] "RemoveContainer" containerID="5f58160f09a5b90dba930a51dfc3c90c52d0dff61c933b5eb87d03ab962a25f6"
Feb 19 00:28:48 crc kubenswrapper[5109]: I0219 00:28:48.289775 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:28:48 crc kubenswrapper[5109]: I0219 00:28:48.290403 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:28:49 crc kubenswrapper[5109]: I0219 00:28:49.661377 5109 generic.go:358] "Generic (PLEG): container finished" podID="001e9e13-338a-4a30-9586-ba0071f745fd" containerID="4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320" exitCode=0
Feb 19 00:28:49 crc kubenswrapper[5109]: I0219 00:28:49.661439 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-5ktgb/must-gather-97d9b" event={"ID":"001e9e13-338a-4a30-9586-ba0071f745fd","Type":"ContainerDied","Data":"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"}
Feb 19 00:28:49 crc kubenswrapper[5109]: I0219 00:28:49.662196 5109 scope.go:117] "RemoveContainer" containerID="4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"
Feb 19 00:28:49 crc kubenswrapper[5109]: I0219 00:28:49.788311 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5ktgb_must-gather-97d9b_001e9e13-338a-4a30-9586-ba0071f745fd/gather/0.log"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.088998 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-5ktgb/must-gather-97d9b"]
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.089972 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-5ktgb/must-gather-97d9b" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="copy" containerID="cri-o://872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966" gracePeriod=2
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.092086 5109 status_manager.go:895] "Failed to get status for pod" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" pod="openshift-must-gather-5ktgb/must-gather-97d9b" err="pods \"must-gather-97d9b\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-5ktgb\": no relationship found between node 'crc' and this object"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.099659 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-5ktgb/must-gather-97d9b"]
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.443459 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5ktgb_must-gather-97d9b_001e9e13-338a-4a30-9586-ba0071f745fd/copy/0.log"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.444157 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5ktgb/must-gather-97d9b"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.445451 5109 status_manager.go:895] "Failed to get status for pod" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" pod="openshift-must-gather-5ktgb/must-gather-97d9b" err="pods \"must-gather-97d9b\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-5ktgb\": no relationship found between node 'crc' and this object"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.458097 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output\") pod \"001e9e13-338a-4a30-9586-ba0071f745fd\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") "
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.458330 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lfnv\" (UniqueName: \"kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv\") pod \"001e9e13-338a-4a30-9586-ba0071f745fd\" (UID: \"001e9e13-338a-4a30-9586-ba0071f745fd\") "
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.463969 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv" (OuterVolumeSpecName: "kube-api-access-2lfnv") pod "001e9e13-338a-4a30-9586-ba0071f745fd" (UID: "001e9e13-338a-4a30-9586-ba0071f745fd"). InnerVolumeSpecName "kube-api-access-2lfnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.505392 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "001e9e13-338a-4a30-9586-ba0071f745fd" (UID: "001e9e13-338a-4a30-9586-ba0071f745fd"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.560945 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2lfnv\" (UniqueName: \"kubernetes.io/projected/001e9e13-338a-4a30-9586-ba0071f745fd-kube-api-access-2lfnv\") on node \"crc\" DevicePath \"\""
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.561001 5109 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/001e9e13-338a-4a30-9586-ba0071f745fd-must-gather-output\") on node \"crc\" DevicePath \"\""
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.728033 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-5ktgb_must-gather-97d9b_001e9e13-338a-4a30-9586-ba0071f745fd/copy/0.log"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.728370 5109 generic.go:358] "Generic (PLEG): container finished" podID="001e9e13-338a-4a30-9586-ba0071f745fd" containerID="872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966" exitCode=143
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.728421 5109 scope.go:117] "RemoveContainer" containerID="872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.728445 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-5ktgb/must-gather-97d9b"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.731433 5109 status_manager.go:895] "Failed to get status for pod" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" pod="openshift-must-gather-5ktgb/must-gather-97d9b" err="pods \"must-gather-97d9b\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-5ktgb\": no relationship found between node 'crc' and this object"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.743496 5109 status_manager.go:895] "Failed to get status for pod" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" pod="openshift-must-gather-5ktgb/must-gather-97d9b" err="pods \"must-gather-97d9b\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-5ktgb\": no relationship found between node 'crc' and this object"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.750725 5109 scope.go:117] "RemoveContainer" containerID="4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.835153 5109 scope.go:117] "RemoveContainer" containerID="872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966"
Feb 19 00:28:56 crc kubenswrapper[5109]: E0219 00:28:56.835967 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966\": container with ID starting with 872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966 not found: ID does not exist" containerID="872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.836008 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966"} err="failed to get container status \"872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966\": rpc error: code = NotFound desc = could not find container \"872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966\": container with ID starting with 872c5f5f27480ca3d3ccf0f8b19654c42cdf971c22fb3acb7cdbbb3b54b5e966 not found: ID does not exist"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.836026 5109 scope.go:117] "RemoveContainer" containerID="4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"
Feb 19 00:28:56 crc kubenswrapper[5109]: E0219 00:28:56.836417 5109 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320\": container with ID starting with 4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320 not found: ID does not exist" containerID="4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.836486 5109 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320"} err="failed to get container status \"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320\": rpc error: code = NotFound desc = could not find container \"4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320\": container with ID starting with 4588cb18740bbc5eba406350719146e391601501807e04081276d899ddb4a320 not found: ID does not exist"
Feb 19 00:28:56 crc kubenswrapper[5109]: I0219 00:28:56.999231 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" path="/var/lib/kubelet/pods/001e9e13-338a-4a30-9586-ba0071f745fd/volumes"
Feb 19 00:29:18 crc kubenswrapper[5109]: I0219 00:29:18.289534 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:29:18 crc kubenswrapper[5109]: I0219 00:29:18.290085 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:29:31 crc kubenswrapper[5109]: I0219 00:29:31.530060 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log"
Feb 19 00:29:31 crc kubenswrapper[5109]: I0219 00:29:31.532912 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-ctz69_9d3c36ec-d151-4cb3-8bcb-931c2665a1e7/kube-multus/0.log"
Feb 19 00:29:31 crc kubenswrapper[5109]: I0219 00:29:31.545155 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 19 00:29:31 crc kubenswrapper[5109]: I0219 00:29:31.545320 5109 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 19 00:29:48 crc kubenswrapper[5109]: I0219 00:29:48.289971 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 19 00:29:48 crc kubenswrapper[5109]: I0219 00:29:48.290759 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 19 00:29:48 crc kubenswrapper[5109]: I0219 00:29:48.290846 5109 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt"
Feb 19 00:29:48 crc kubenswrapper[5109]: I0219 00:29:48.292028 5109 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02ce947e6ce5cf6117579557f049809d128808573cc503d03d9df931d899d624"} pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 19 00:29:48 crc kubenswrapper[5109]: I0219 00:29:48.292165 5109 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerName="machine-config-daemon" containerID="cri-o://02ce947e6ce5cf6117579557f049809d128808573cc503d03d9df931d899d624" gracePeriod=600
Feb 19 00:29:49 crc kubenswrapper[5109]: I0219 00:29:49.182261 5109 generic.go:358] "Generic (PLEG): container finished" podID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" containerID="02ce947e6ce5cf6117579557f049809d128808573cc503d03d9df931d899d624" exitCode=0
Feb 19 00:29:49 crc kubenswrapper[5109]: I0219 00:29:49.182347 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerDied","Data":"02ce947e6ce5cf6117579557f049809d128808573cc503d03d9df931d899d624"}
Feb 19 00:29:49 crc kubenswrapper[5109]: I0219 00:29:49.182908 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" event={"ID":"3dd0092b-65e0-496b-aad5-33d7ca9ca9d6","Type":"ContainerStarted","Data":"38c9f81465ff0ff0cd767a4462afaa3da2c7d3e1952f022b5695de929d6842ea"}
Feb 19 00:29:49 crc kubenswrapper[5109]: I0219 00:29:49.182935 5109 scope.go:117] "RemoveContainer" containerID="366c890b410045dd1bd67531cc9769dfe02e13f4d55248ebad99c0b955599668"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.156402 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"]
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158774 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bf375d92-9566-4412-b54c-70a567d7ac26" containerName="oc"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158811 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf375d92-9566-4412-b54c-70a567d7ac26" containerName="oc"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158852 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="copy"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158864 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="copy"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158904 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="gather"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158918 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="gather"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158935 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="extract-utilities"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158948 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="extract-utilities"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158969 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="extract-content"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.158981 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="extract-content"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159028 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="registry-server"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159041 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="registry-server"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159292 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="bf375d92-9566-4412-b54c-70a567d7ac26" containerName="oc"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159346 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="gather"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159373 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="3c95f41d-737d-4324-a6af-96e23a766009" containerName="registry-server"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.159402 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="001e9e13-338a-4a30-9586-ba0071f745fd" containerName="copy"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.169198 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524350-v928f"]
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.170752 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.173909 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\""
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.174740 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"]
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.174901 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-v928f"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.175179 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\""
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.175360 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-v928f"]
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.177258 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.177473 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\""
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.177669 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.253624 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft65w\" (UniqueName: \"kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.254002 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.254063 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.254087 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5dtc\" (UniqueName: \"kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc\") pod \"auto-csr-approver-29524350-v928f\" (UID: \"b6bee126-6d17-4f52-bb9f-9e09602bdde8\") " pod="openshift-infra/auto-csr-approver-29524350-v928f"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.355287 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.355365 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.355538 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b5dtc\" (UniqueName: \"kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc\") pod \"auto-csr-approver-29524350-v928f\" (UID: \"b6bee126-6d17-4f52-bb9f-9e09602bdde8\") " pod="openshift-infra/auto-csr-approver-29524350-v928f"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.355656 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ft65w\" (UniqueName: \"kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.356548 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.374878 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.377883 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5dtc\" (UniqueName: \"kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc\") pod \"auto-csr-approver-29524350-v928f\" (UID: \"b6bee126-6d17-4f52-bb9f-9e09602bdde8\") " pod="openshift-infra/auto-csr-approver-29524350-v928f"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.379261 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft65w\" (UniqueName: \"kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w\") pod \"collect-profiles-29524350-8fn2j\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.495238 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.505800 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-v928f"
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.901569 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j"]
Feb 19 00:30:00 crc kubenswrapper[5109]: I0219 00:30:00.947653 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524350-v928f"]
Feb 19 00:30:00 crc kubenswrapper[5109]: W0219 00:30:00.952077 5109 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6bee126_6d17_4f52_bb9f_9e09602bdde8.slice/crio-bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35 WatchSource:0}: Error finding container bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35: Status 404 returned error can't find the container with id bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35
Feb 19 00:30:01 crc kubenswrapper[5109]: I0219 00:30:01.289374 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j" event={"ID":"be5b4953-b9ec-448f-9725-ab0a0d49b893","Type":"ContainerStarted","Data":"8c54cc297a5e62406517a33b0c1db17d92a8bf83488732f211c3173118ac7893"}
Feb 19 00:30:01 crc kubenswrapper[5109]: I0219 00:30:01.291019 5109 kubelet.go:2569]
"SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-v928f" event={"ID":"b6bee126-6d17-4f52-bb9f-9e09602bdde8","Type":"ContainerStarted","Data":"bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35"} Feb 19 00:30:02 crc kubenswrapper[5109]: I0219 00:30:02.304810 5109 generic.go:358] "Generic (PLEG): container finished" podID="be5b4953-b9ec-448f-9725-ab0a0d49b893" containerID="d16a32d013d8f488738d63dacec9985eb8309d170703cdffc83b3724be308d31" exitCode=0 Feb 19 00:30:02 crc kubenswrapper[5109]: I0219 00:30:02.304930 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j" event={"ID":"be5b4953-b9ec-448f-9725-ab0a0d49b893","Type":"ContainerDied","Data":"d16a32d013d8f488738d63dacec9985eb8309d170703cdffc83b3724be308d31"} Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.596328 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.701137 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume\") pod \"be5b4953-b9ec-448f-9725-ab0a0d49b893\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.701203 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft65w\" (UniqueName: \"kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w\") pod \"be5b4953-b9ec-448f-9725-ab0a0d49b893\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.701249 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume\") pod \"be5b4953-b9ec-448f-9725-ab0a0d49b893\" (UID: \"be5b4953-b9ec-448f-9725-ab0a0d49b893\") " Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.702126 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume" (OuterVolumeSpecName: "config-volume") pod "be5b4953-b9ec-448f-9725-ab0a0d49b893" (UID: "be5b4953-b9ec-448f-9725-ab0a0d49b893"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.707091 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w" (OuterVolumeSpecName: "kube-api-access-ft65w") pod "be5b4953-b9ec-448f-9725-ab0a0d49b893" (UID: "be5b4953-b9ec-448f-9725-ab0a0d49b893"). InnerVolumeSpecName "kube-api-access-ft65w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.710234 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "be5b4953-b9ec-448f-9725-ab0a0d49b893" (UID: "be5b4953-b9ec-448f-9725-ab0a0d49b893"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.802484 5109 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be5b4953-b9ec-448f-9725-ab0a0d49b893-config-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.802523 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ft65w\" (UniqueName: \"kubernetes.io/projected/be5b4953-b9ec-448f-9725-ab0a0d49b893-kube-api-access-ft65w\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:03 crc kubenswrapper[5109]: I0219 00:30:03.802533 5109 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/be5b4953-b9ec-448f-9725-ab0a0d49b893-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:04 crc kubenswrapper[5109]: I0219 00:30:04.332239 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j" Feb 19 00:30:04 crc kubenswrapper[5109]: I0219 00:30:04.332240 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29524350-8fn2j" event={"ID":"be5b4953-b9ec-448f-9725-ab0a0d49b893","Type":"ContainerDied","Data":"8c54cc297a5e62406517a33b0c1db17d92a8bf83488732f211c3173118ac7893"} Feb 19 00:30:04 crc kubenswrapper[5109]: I0219 00:30:04.332671 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c54cc297a5e62406517a33b0c1db17d92a8bf83488732f211c3173118ac7893" Feb 19 00:30:04 crc kubenswrapper[5109]: I0219 00:30:04.334543 5109 generic.go:358] "Generic (PLEG): container finished" podID="b6bee126-6d17-4f52-bb9f-9e09602bdde8" containerID="ed76d797f56b46d10d19669f7781c167ccc6f4ef4fec0a34f288006ffe4db43d" exitCode=0 Feb 19 00:30:04 crc kubenswrapper[5109]: I0219 00:30:04.334608 5109 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-v928f" event={"ID":"b6bee126-6d17-4f52-bb9f-9e09602bdde8","Type":"ContainerDied","Data":"ed76d797f56b46d10d19669f7781c167ccc6f4ef4fec0a34f288006ffe4db43d"} Feb 19 00:30:05 crc kubenswrapper[5109]: I0219 00:30:05.627201 5109 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-v928f" Feb 19 00:30:05 crc kubenswrapper[5109]: I0219 00:30:05.734029 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5dtc\" (UniqueName: \"kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc\") pod \"b6bee126-6d17-4f52-bb9f-9e09602bdde8\" (UID: \"b6bee126-6d17-4f52-bb9f-9e09602bdde8\") " Feb 19 00:30:05 crc kubenswrapper[5109]: I0219 00:30:05.741117 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc" (OuterVolumeSpecName: "kube-api-access-b5dtc") pod "b6bee126-6d17-4f52-bb9f-9e09602bdde8" (UID: "b6bee126-6d17-4f52-bb9f-9e09602bdde8"). InnerVolumeSpecName "kube-api-access-b5dtc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:30:05 crc kubenswrapper[5109]: I0219 00:30:05.836876 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b5dtc\" (UniqueName: \"kubernetes.io/projected/b6bee126-6d17-4f52-bb9f-9e09602bdde8-kube-api-access-b5dtc\") on node \"crc\" DevicePath \"\"" Feb 19 00:30:06 crc kubenswrapper[5109]: I0219 00:30:06.355441 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524350-v928f" Feb 19 00:30:06 crc kubenswrapper[5109]: I0219 00:30:06.355491 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524350-v928f" event={"ID":"b6bee126-6d17-4f52-bb9f-9e09602bdde8","Type":"ContainerDied","Data":"bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35"} Feb 19 00:30:06 crc kubenswrapper[5109]: I0219 00:30:06.355550 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdd8cb84a69a3e117fcc13011a43a2294d8ca56097ffcd5ebcbc20f0a80a8d35" Feb 19 00:30:06 crc kubenswrapper[5109]: I0219 00:30:06.711405 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-d4wqv"] Feb 19 00:30:06 crc kubenswrapper[5109]: I0219 00:30:06.722365 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524344-d4wqv"] Feb 19 00:30:07 crc kubenswrapper[5109]: I0219 00:30:07.015154 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f698e68-6c95-41c2-a911-b81382b3b111" path="/var/lib/kubelet/pods/5f698e68-6c95-41c2-a911-b81382b3b111/volumes" Feb 19 00:30:37 crc kubenswrapper[5109]: I0219 00:30:37.659598 5109 scope.go:117] "RemoveContainer" containerID="2ce547874796683ed01486e087e55964c7e268e5d7757598f769079ce90f1732" Feb 19 00:31:48 crc kubenswrapper[5109]: I0219 00:31:48.289246 5109 patch_prober.go:28] interesting pod/machine-config-daemon-ntpdt container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 19 00:31:48 crc kubenswrapper[5109]: I0219 00:31:48.289864 5109 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-ntpdt" podUID="3dd0092b-65e0-496b-aad5-33d7ca9ca9d6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.140998 5109 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29524352-q2fhx"] Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142192 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b6bee126-6d17-4f52-bb9f-9e09602bdde8" containerName="oc" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142208 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bee126-6d17-4f52-bb9f-9e09602bdde8" containerName="oc" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142235 5109 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be5b4953-b9ec-448f-9725-ab0a0d49b893" containerName="collect-profiles" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142242 5109 state_mem.go:107] "Deleted CPUSet assignment" podUID="be5b4953-b9ec-448f-9725-ab0a0d49b893" containerName="collect-profiles" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142414 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="b6bee126-6d17-4f52-bb9f-9e09602bdde8" containerName="oc" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.142429 5109 memory_manager.go:356] "RemoveStaleState removing state" podUID="be5b4953-b9ec-448f-9725-ab0a0d49b893" containerName="collect-profiles" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.148614 5109 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.152081 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.152273 5109 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.153043 5109 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-djqtz\"" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.155302 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-q2fhx"] Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.195899 5109 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54\") pod \"auto-csr-approver-29524352-q2fhx\" (UID: \"2b96843d-c45a-493d-a9f0-648f2df2f5cc\") " pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.297381 5109 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54\") pod \"auto-csr-approver-29524352-q2fhx\" (UID: \"2b96843d-c45a-493d-a9f0-648f2df2f5cc\") " pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.336186 5109 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54\") pod \"auto-csr-approver-29524352-q2fhx\" (UID: 
\"2b96843d-c45a-493d-a9f0-648f2df2f5cc\") " pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.471900 5109 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.742053 5109 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29524352-q2fhx"] Feb 19 00:32:00 crc kubenswrapper[5109]: I0219 00:32:00.751656 5109 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 19 00:32:01 crc kubenswrapper[5109]: I0219 00:32:01.458898 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" event={"ID":"2b96843d-c45a-493d-a9f0-648f2df2f5cc","Type":"ContainerStarted","Data":"49073581b170625b380d748cb156bcaffc0e1a1af3a0bb7ddf5cff3e3e0eb315"} Feb 19 00:32:02 crc kubenswrapper[5109]: I0219 00:32:02.469242 5109 generic.go:358] "Generic (PLEG): container finished" podID="2b96843d-c45a-493d-a9f0-648f2df2f5cc" containerID="887bb80f19f24ecb17823c385c5c0c4b34f5a6e1f0e75f8781c0ee5007d4cad4" exitCode=0 Feb 19 00:32:02 crc kubenswrapper[5109]: I0219 00:32:02.469386 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" event={"ID":"2b96843d-c45a-493d-a9f0-648f2df2f5cc","Type":"ContainerDied","Data":"887bb80f19f24ecb17823c385c5c0c4b34f5a6e1f0e75f8781c0ee5007d4cad4"} Feb 19 00:32:03 crc kubenswrapper[5109]: I0219 00:32:03.730237 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:03 crc kubenswrapper[5109]: I0219 00:32:03.899552 5109 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54\") pod \"2b96843d-c45a-493d-a9f0-648f2df2f5cc\" (UID: \"2b96843d-c45a-493d-a9f0-648f2df2f5cc\") " Feb 19 00:32:03 crc kubenswrapper[5109]: I0219 00:32:03.908072 5109 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54" (OuterVolumeSpecName: "kube-api-access-lcb54") pod "2b96843d-c45a-493d-a9f0-648f2df2f5cc" (UID: "2b96843d-c45a-493d-a9f0-648f2df2f5cc"). InnerVolumeSpecName "kube-api-access-lcb54". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.002352 5109 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lcb54\" (UniqueName: \"kubernetes.io/projected/2b96843d-c45a-493d-a9f0-648f2df2f5cc-kube-api-access-lcb54\") on node \"crc\" DevicePath \"\"" Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.487030 5109 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.487050 5109 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29524352-q2fhx" event={"ID":"2b96843d-c45a-493d-a9f0-648f2df2f5cc","Type":"ContainerDied","Data":"49073581b170625b380d748cb156bcaffc0e1a1af3a0bb7ddf5cff3e3e0eb315"} Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.487444 5109 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49073581b170625b380d748cb156bcaffc0e1a1af3a0bb7ddf5cff3e3e0eb315" Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.805970 5109 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-n8sqw"] Feb 19 00:32:04 crc kubenswrapper[5109]: I0219 00:32:04.811163 5109 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29524346-n8sqw"] Feb 19 00:32:05 crc kubenswrapper[5109]: I0219 00:32:05.000672 5109 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4761ed2-dd5e-4f35-b221-ad9799b89004" path="/var/lib/kubelet/pods/d4761ed2-dd5e-4f35-b221-ad9799b89004/volumes"